Compare commits

...

65 Commits

Author SHA1 Message Date
Kevin Veen-Birkenbach
0a6c2f2988 Release version 0.9.1 2025-12-10 22:56:04 +01:00
Kevin Veen-Birkenbach
0c90e984ad Refine setup workflows and add architecture map
- Split virgin tests into separate root and user GitHub Actions workflows
  (test-virgin-root, test-virgin-user) and adjust Arch container flows
- Introduce scripts/installation/venv-create.sh and reuse it from
  scripts/installation/main.sh with separate root/system and user/dev paths
- Add PKGMGR architecture & setup map (assets/map.png) and section in README
  with link to the up-to-date master page
- Simplify README by removing outdated Docker quickstart, usage examples,
  and AI footer
- Extend .gitignore to exclude src/source artifacts

https://chatgpt.com/share/6939bbfe-5cb0-800f-8ea8-95628dc911f5
2025-12-10 22:51:40 +01:00
Kevin Veen-Birkenbach
0a0cbbfe6d fix(init-nix): create 'nix' user with a valid shell across all distros
The init-nix.sh script previously hardcoded /usr/bin/bash as the login shell
for the 'nix' user, which exists on Arch but not on Debian. This caused the
Nix single-user installer (run via `su - nix`) to fail silently or break in
unpredictable ways on Debian-based images.

We now resolve the shell dynamically via `command -v bash` and fall back to
/bin/sh on minimal systems. This makes Nix installation deterministic across
Arch, Debian, Ubuntu, Fedora, CentOS and CI containers.

https://chatgpt.com/share/6939e97f-c93c-800f-887b-27c7e67ec46d
2025-12-10 22:43:20 +01:00
Kevin Veen-Birkenbach
15c44cd484 Removed deprecated pkgmgr.yml 2025-12-10 21:34:33 +01:00
Kevin Veen-Birkenbach
6d7ee6fc04 Fix test scripts: ensure default distro and always run via bash
- Remove Makefile inline variable export (distro=arch) and invoke scripts via bash
- Add robust default in test-unit.sh and test-integration.sh:
    : "${distro:=arch}"
- Prevent "unbound variable" errors under `set -u` when no distro is provided
2025-12-10 21:09:18 +01:00
Kevin Veen-Birkenbach
5a022db0db Use dynamic distro selection for UNIT and INTEGRATION tests
- Pass `distro=arch` from Makefile into test scripts
- Replace hardcoded "arch" references with "${distro}"
- Update test-unit.sh and test-integration.sh to use dynamic image names
- Improve log output to reflect selected distro

https://chatgpt.com/share/6939c98a-d428-800f-8bb8-cf72e80ba80c
2025-12-10 20:27:03 +01:00
Kevin Veen-Birkenbach
37ac22e0b4 test: isolate Nix store/cache per distro to fix cross-distro manifest conflicts
- Replace shared Nix volumes with distro-specific volumes
  (pkgmgr_nix_store_<distro>, pkgmgr_nix_cache_<distro>)
- Prevent incompatible profile manifest versions between Ubuntu and Debian
- Update all test scripts (unit, integration, container, e2e)
- Remove unused global Nix volume variables from Makefile
- Improve consistency of test-e2e.sh formatting and environment handling
- Add Git safe.directory configuration for mounted /src to avoid ownership warnings
2025-12-10 20:07:41 +01:00
Kevin Veen-Birkenbach
bcea440e40 Fix path and shell repo directory resolution + add unit/E2E tests
- Introduce `_resolve_repository_directory()` to unify directory lookup
  (explicit `directory` key → fallback to `get_repo_dir()` using base dir)
- Fix `pkgmgr path` to avoid KeyError and behave consistently with
  other commands using lazy directory resolution
- Fix `pkgmgr shell` to use resolved directory and correctly emit cwd
- Add full E2E tests for `pkgmgr path --all` and `pkgmgr path pkgmgr`
- Add unit tests covering:
    * explicit directory usage
    * fallback resolution via get_repo_dir()
    * empty selection behavior
    * shell command cwd resolution
    * missing shell command error handling
2025-12-10 19:47:26 +01:00
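A minimal sketch of the lookup order described in the commit above: prefer an explicit `directory` key, otherwise fall back to a base-directory convention. The dictionary shape and helper names are illustrative assumptions, not the actual pkgmgr API.

```python
import os

def get_repo_dir(base_dir: str, repo: dict) -> str:
    # Illustrative fallback: derive the path from the base directory and the
    # repository identifier (assumed shape, not the real get_repo_dir()).
    return os.path.join(base_dir, repo["identifier"])

def resolve_repository_directory(repo: dict, base_dir: str) -> str:
    # An explicit 'directory' key wins; otherwise use the fallback above.
    explicit = repo.get("directory")
    if explicit:
        return explicit
    return get_repo_dir(base_dir, repo)

print(resolve_repository_directory({"identifier": "pkgmgr"}, "/home/user/Repositories"))
```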
Kevin Veen-Birkenbach
6edde2d65b Release version 0.9.0 2025-12-10 18:38:10 +01:00
Kevin Veen-Birkenbach
74189c1e14 Add virgin Nix flake E2E workflow and update .gitignore
- Introduce `test-nix-flake-e2e.yml` workflow to run a full Arch-based virgin
  environment test with Nix flakes enabled and shared Docker caches
- Ensure pkgmgr self-installation and flake-based installer path are exercised
- Update .gitignore with additional build artifacts, Debian packaging files,
  and pkgmgr output directories
2025-12-10 18:37:29 +01:00
Kevin Veen-Birkenbach
b5ddf7402a Release version 0.8.0 2025-12-10 17:32:00 +01:00
Kevin Veen-Birkenbach
900224ed2e Moved installer dir 2025-12-10 17:27:26 +01:00
Kevin Veen-Birkenbach
e290043089 Refine installer capability integration tests and documentation
- Adjust install_repos integration test to patch resolve_command_for_repo
  in the pipeline module and tighten DummyInstaller overrides
- Rewrite recursive capability integration tests to focus on layer
  ordering and capability shadowing across Makefile, Python, Nix
  and OS-package installers
- Extend recursive capabilities markdown with hierarchy diagram,
  capability matrix, scenario matrix and link to the external
  setup controller schema

https://chatgpt.com/share/69399857-4d84-800f-a636-6bcd1ab5e192
2025-12-10 17:23:33 +01:00
Kevin Veen-Birkenbach
a7fd37d646 Add unit tests for install pipeline, Nix flake installer, and command resolution
https://chatgpt.com/share/69399857-4d84-800f-a636-6bcd1ab5e192
2025-12-10 16:57:02 +01:00
Kevin Veen-Birkenbach
d4b00046d3 Refine installer layering and Python/Nix integration
- Introduce explicit CLI layer model (os-packages, nix, python, makefile)
  and central InstallationPipeline to orchestrate installers.
- Move installer orchestration out of install_repos() into
  pkgmgr.actions.repository.install.pipeline, using layer precedence and
  capability tracking.
- Add pkgmgr.actions.repository.install.layers to classify commands into
  layers and compare priorities.
- Rework PythonInstaller to always use isolated environments:
  PKGMGR_PIP override → active venv → per-repo venv under ~/.venvs/<identifier>,
  avoiding system Python and PEP 668 conflicts.
- Adjust NixFlakeInstaller to install flake outputs based on repository
  identity: pkgmgr/package-manager → pkgmgr (mandatory) + default (optional),
  all other repos → default (mandatory).
- Tighten MakefileInstaller behaviour, add global
  PKGMGR_DISABLE_MAKEFILE_INSTALLER switch, and simplify install target
  detection.
- Rewrite resolve_command_for_repo() with explicit Repository typing,
  better Python package detection, Nix/PATH resolution, and a
  library-only fallback instead of raising on missing CLI.
- Update flake.nix devShell to provide python3 with pip and add pip as a
  propagated build input.
- Remove deprecated/wip repository entries from config defaults and drop
  the unused config/wip.yml.

https://chatgpt.com/share/69399157-86d8-800f-9935-1a820893e908
2025-12-10 16:26:23 +01:00
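A rough sketch of the layer-precedence idea from the commit above, assuming lower-numbered layers take priority (os-packages before Nix before Python before Makefile). The enum and sorting are illustrative only; the real pipeline lives under pkgmgr.actions.repository.install.

```python
from enum import IntEnum

class Layer(IntEnum):
    # Assumed precedence: lower value = higher priority (illustrative only).
    OS_PACKAGES = 0
    NIX = 1
    PYTHON = 2
    MAKEFILE = 3

def order_by_precedence(candidates):
    """Sort (layer, installer command) pairs so higher-priority layers run first."""
    return sorted(candidates, key=lambda pair: pair[0])

plan = order_by_precedence([
    (Layer.MAKEFILE, "make install"),
    (Layer.PYTHON, "pip install ."),
    (Layer.NIX, "nix profile install .#default"),
])
for layer, cmd in plan:
    print(f"{layer.name:12s} -> {cmd}")
```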
Kevin Veen-Birkenbach
545d345ea4 core(command): implement explicit command=None bypass and add unit tests
This update introduces Variant B behavior in the command resolver:

- If a repository explicitly defines the key "command" (even if its value is None),

  resolve_command_for_repo() treats it as authoritative and returns immediately.
  This allows library-only repositories to declare:
      command: null
  which disables CLI resolution entirely.

- As a result, Python package repositories without installed CLI entry points
  no longer trigger SystemExit during update/install flows, as long as they set
  command: null in their repo configuration.

The resolution logic is now bypassed for such repositories, skipping:
  - Python package detection (src/*/__main__.py)
  - PATH/Nix/venv binary lookup
  - main.sh/main.py fallback evaluation

A new unit test suite has been added under
  tests/unit/pkgmgr/core/command/test_resolve.py
covering:

 1) Python package without installed command → SystemExit
 2) Python package with installed command → returned correctly
 3) Script repository fallback to main.py
 4) Explicit command overrides all logic

This commit stabilizes update/install flows and ensures library-only
repositories behave as intended when no CLI command is provided.

https://chatgpt.com/share/69394a53-bc78-800f-995d-21099a68dd60
2025-12-10 11:23:57 +01:00
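A condensed sketch of the Variant B behaviour described above: an explicit `command` key is authoritative, even when it is null. The dict-based repository shape is an assumption; the real resolver also handles Python package detection and PATH/Nix/venv lookup.

```python
def resolve_command_for_repo(repo: dict):
    # Variant B: the mere presence of 'command' short-circuits resolution,
    # even if the configured value is None (YAML: command: null).
    if "command" in repo:
        return repo["command"]
    # Otherwise the full detection logic would run here
    # (Python package detection, PATH/Nix/venv lookup, main.sh/main.py fallback).
    raise SystemExit("no CLI command could be resolved")

print(resolve_command_for_repo({"command": None}))      # None: library-only repo
print(resolve_command_for_repo({"command": "pkgmgr"}))   # explicit command wins
```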
Kevin Veen-Birkenbach
a29b831e41 Release version 0.7.14 2025-12-10 10:38:36 +01:00
Kevin Veen-Birkenbach
bc9ca140bd fix(e2e): treat SystemExit(0) as successful CLI termination in clone-all test
The pkgmgr proxy layer may intentionally terminate the process via
SystemExit(0). The previous test logic interpreted any SystemExit as a failure,
causing false negatives during `pkgmgr clone --all` E2E runs.

This patch updates `test_clone_all.py` to:
- accept SystemExit(0) as a successful run,
- only fail on non-zero exit codes,
- preserve diagnostic output for real failures.

This stabilizes the clone-all E2E test across proxy-triggered exits.

https://chatgpt.com/share/69393f6b-b854-800f-aabb-25811bbb8c74
2025-12-10 10:37:40 +01:00
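The pattern described above as a small self-contained example (not the actual test file): only a non-zero exit code is treated as a failure.

```python
import unittest

def run_cli():
    # Stand-in for the pkgmgr proxy layer, which may terminate via SystemExit(0).
    raise SystemExit(0)

class CloneAllStyleTest(unittest.TestCase):
    def test_systemexit_zero_counts_as_success(self):
        try:
            run_cli()
        except SystemExit as exc:
            # Exit code 0 (or None) means success; anything else is a real failure.
            self.assertIn(exc.code, (0, None))

if __name__ == "__main__":
    unittest.main()
```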
Kevin Veen-Birkenbach
ad8e3cd07c Updated CHANGELOG.md 2025-12-10 10:28:20 +01:00
Kevin Veen-Birkenbach
22efe0b32e Release version 0.7.13 2025-12-10 10:27:27 +01:00
Kevin Veen-Birkenbach
d23a0a94d5 Fix tools path resolution and add tests
- Use _resolve_repository_path() for explore, terminal and code commands
  so tools no longer rely on a 'directory' key in the repository dict.
- Fall back to repositories_base_dir/repositories_dir via get_repo_dir()
  when no explicit path-like key is present.
- Make VS Code workspace creation more robust (safe default for
  directories.workspaces and UTF-8 when writing JSON).
- Add unit tests for handle_tools_command (explore, terminal, code) under
  tests/unit/pkgmgr/cli/commands/test_tools.py.
- Add E2E/integration-style tests for the tools subcommands' --help
  output under tests/e2e/test_tools_help.py, treating SystemExit(0) as
  success.

This change fixes the KeyError: 'directory' when running 'pkgmgr code'
and verifies the behavior via unit and integration tests.

https://chatgpt.com/share/69393ca1-b554-800f-9967-abf8c4e3fea3
2025-12-10 10:25:29 +01:00
Kevin Veen-Birkenbach
e42b79c9d8 Add E2E tests for 'clone --all' and 'update --all' using HTTPS mode
This commit introduces two new end-to-end integration tests:

  • tests/e2e/test_clone_all.py
      Runs: pkgmgr clone --all --clone-mode https --no-verification
      Verifies that full HTTPS cloning of all configured repositories
      works inside the test container environment.

  • tests/e2e/test_update_all.py
      Runs: pkgmgr update --all --clone-mode https --no-verification
      Ensures that updating all repositories with HTTPS mode completes
      successfully without raising exceptions.

Both tests:
  - Provide extended diagnostics on SystemExit
  - Reuse nix-profile cleanup helpers for consistent test environments
  - Validate that `pkgmgr --help` works after execution

These tests complement the existing shallow-install integration test
and improve overall reliability of HTTPS clone/update workflows.
2025-12-09 23:47:43 +01:00
Kevin Veen-Birkenbach
3b2c657bfa Release version 0.7.12 2025-12-09 23:36:38 +01:00
Kevin Veen-Birkenbach
e335ab05a1 fix(core/ink): prevent self-referential symlinks + add unit tests
This commit adds a safety guard to create_ink() to prevent creation of
self-referential symlinks when the resolved command already lives at the
intended link target (e.g. ~/.local/bin/package-manager). Such a situation
previously resulted in broken shells with the error:

    "zsh: too many levels of symbolic links"

Key changes:
  - create_ink():
      • Introduce early-abort guard when command == link_path
      • Improve function signature and formatting
      • Enhance alias creation messaging

  - Added comprehensive unit tests under:
        tests/unit/pkgmgr/core/command/test_ink.py
    Tests cover:
      • Self-referential command path → skip symlink creation
      • Standard symlink + alias creation behaviour

This prevents pkgmgr from overwriting user-managed binaries inside ~/.local/bin
and ensures predictable, safe behaviour across all installer layers.

https://chatgpt.com/share/6938a43b-0eb8-800f-9545-6cb555ab406d
2025-12-09 23:35:29 +01:00
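A simplified illustration of the early-abort guard described above; the signature and messages are assumptions, not the real create_ink() from pkgmgr.core.

```python
import os

def create_ink(command: str, link_path: str) -> None:
    # Guard: if the resolved command already *is* the intended link target,
    # creating the symlink would make the file point at itself.
    if os.path.realpath(command) == os.path.realpath(link_path):
        print(f"Skipping symlink: {link_path} already provides {command}")
        return
    if os.path.lexists(link_path):
        os.remove(link_path)
    os.symlink(command, link_path)

# Self-referential case: nothing is created, the existing binary is left alone.
create_ink("/usr/bin/env", "/usr/bin/env")
```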
Kevin Veen-Birkenbach
75f963d6e2 Removed tests/e2e/test_install_all_shallow.py 2025-12-09 23:18:49 +01:00
Kevin Veen-Birkenbach
94b998741f Release version 0.7.11 2025-12-09 23:16:48 +01:00
Kevin Veen-Birkenbach
172c734866 test: fix installer unit tests for OS packages and Nix dev shell
Update Debian, RPM, Nix flake, and Python installer unit tests to match the current
installer behavior and to run correctly inside the Nix development shell.

- DebianControlInstaller:
  - Add clearer docstrings for supports() behavior.
  - Relax final install assertion to accept dpkg -i, sudo dpkg -i, or
    sudo apt-get install -y.
  - Keep checks for apt-get update, apt-get build-dep, and dpkg-buildpackage.

- RpmSpecInstaller:
  - Add docstrings for supports() conditions.
  - Mock _prepare_source_tarball() to avoid touching the filesystem.
  - Assert builddep, rpmbuild -ba, and sudo dnf install -y commands.

- NixFlakeInstaller:
  - Ensure supports() and run() tests simulate a non-Nix-shell environment
    via IN_NIX_SHELL and PKGMGR_DISABLE_NIX_FLAKE_INSTALLER.
  - Verify that the old profile entry is removed and both pkgmgr and default
    flake outputs are installed.
  - Confirm _ensure_old_profile_removed() swallows SystemExit.

- PythonInstaller:
  - Make supports() and run() tests ignore the real IN_NIX_SHELL environment.
  - Assert that pip install . is invoked with cwd set to the repository
    directory.

These changes make the unit tests stable in the Nix dev shell and align them
with the current installer implementations.
2025-12-09 23:15:56 +01:00
Kevin Veen-Birkenbach
1b483e178d Release version 0.7.10 2025-12-09 22:57:11 +01:00
Kevin Veen-Birkenbach
78693225f1 test: share persistent Nix store across all test containers
This commit adds the `pkgmgr_nix_store` volume mount (`/nix`) to all test
runners (unit, integration, container sanity checks, and E2E tests).

Previously only the Arch-based E2E container mounted a persistent `/nix`
store, causing all other distros (Debian, Ubuntu, Fedora, CentOS, etc.)
to download the entire Nix closure repeatedly during test runs.

Changes:
- Add `-v pkgmgr_nix_store:/nix` to:
  - scripts/test/test-container.sh
  - scripts/test/test-e2e.sh (remove Arch-only condition)
  - scripts/test/test-unit.sh
  - scripts/test/test-integration.sh
- Ensures all test containers reuse the same Nix store.

Benefits:
- Significantly faster test execution after the first run.
- Prevents redundant downloads from cache.nixos.org.
- Ensures consistent Nix environments across all test distros.

No functional changes to pkgmgr itself; only test infrastructure improved.

https://chatgpt.com/share/693890f5-2f54-800f-b47e-1925da85b434
2025-12-09 22:13:01 +01:00
Kevin Veen-Birkenbach
ca08c84789 Merge branch 'fix/branch-master' 2025-12-09 21:19:53 +01:00
Kevin Veen-Birkenbach
e930b422e5 Release version 0.7.9 2025-12-09 21:19:13 +01:00
Kevin Veen-Birkenbach
0833d04376 Improve branch helpers with main/master base resolution
- Update pkgmgr.actions.branch.open_branch() to resolve the base branch
  via _resolve_base_branch(), preferring 'main' and falling back to
  'master' when the preferred branch does not exist.
- Adjust the open_branch logic to:
  - fetch from origin
  - checkout the resolved base branch
  - pull the resolved base branch
  - create the feature branch
  - push the new branch with upstream tracking
- Add and refine unit tests in tests/unit/pkgmgr/actions/test_branch.py
  to cover:
  - normal branch creation with explicit name and default base
  - interactive name prompting when no name is provided
  - error handling when fetch fails after successful base resolution
  - fallback to 'master' when 'main' is missing.
- Clean up and clarify docstrings and comments for open_branch(),
  close_branch(), and _resolve_base_branch(), and fix the module header
  comment to match the new package path.

This fixes branch opening in repositories that still use 'master' as
their primary branch while keeping the default behavior for 'main'.

https://chatgpt.com/share/6938838f-7aac-800f-b130-924e07ef48b9
2025-12-09 21:16:10 +01:00
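A minimal sketch of the main/master fallback described above, assuming a plain `git rev-parse --verify` check against origin; the real _resolve_base_branch() may differ in detail.

```python
import subprocess

def _resolve_base_branch(repo_dir: str, preferred: str = "main") -> str:
    """Return the preferred base branch if origin has it, else fall back to 'master'."""
    for branch in (preferred, "master"):
        result = subprocess.run(
            ["git", "-C", repo_dir, "rev-parse", "--verify", f"origin/{branch}"],
            capture_output=True,
        )
        if result.returncode == 0:
            return branch
    raise RuntimeError("neither 'main' nor 'master' exists on origin")
```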
Kevin Veen-Birkenbach
55f36d76ec Merge branch 'fix/file-error' 2025-12-09 21:09:48 +01:00
Kevin Veen-Birkenbach
6a838ee84f Release version 0.7.8 2025-12-09 21:03:24 +01:00
Kevin Veen-Birkenbach
4285bf4a54 Fix: release now skips missing pyproject.toml without failing
- Updated update_pyproject_version() to gracefully skip missing or unreadable pyproject.toml
- Added corresponding unit test ensuring missing file triggers no exception and no file creation
- Updated test wording for spec changelog section
- Ref: adjustments discussed in ChatGPT conversation (2025-12-09) - https://chatgpt.com/share/69388024-93e4-800f-a09f-bf78a6b9a53f
2025-12-09 21:02:01 +01:00
Kevin Veen-Birkenbach
640b1042c2 Harden installers for Nix, OS packages and Docker CA handling
- NixFlakeInstaller:
  - Skip when running inside a Nix dev shell (IN_NIX_SHELL).
  - Add PKGMGR_DISABLE_NIX_FLAKE_INSTALLER kill-switch for CI/debugging.
  - Ensure run() respects supports() and handles preview/allow_failure cleanly.

- DebianControlInstaller:
  - Introduce _privileged_prefix() to handle sudo vs. root vs. no elevation.
  - Avoid hard-coded sudo usage and degrade gracefully when neither sudo nor
    root is available.
  - Improve messaging around build-dep and .deb installation.

- RpmSpecInstaller:
  - Prepare rpmbuild tree and source tarball in ~/rpmbuild/SOURCES based on
    Name/Version from the spec file.
  - Reuse a helper to resolve the rpmbuild topdir.
  - Install built RPMs via dnf/yum when available, falling back to rpm -Uvh
    to avoid file conflicts during upgrades.

- PythonInstaller:
  - Skip pip-based installation inside Nix dev shells (IN_NIX_SHELL).
  - Add PKGMGR_DISABLE_PYTHON_INSTALLER kill-switch.
  - Make pip command resolution explicit and overridable via PKGMGR_PIP.
  - Type-hint supports() and run() with RepoContext/InstallContext.

- Docker entrypoint:
  - Add robust CA bundle detection for Nix, Git, Python requests and curl.
  - Export NIX_SSL_CERT_FILE, SSL_CERT_FILE, REQUESTS_CA_BUNDLE and
    GIT_SSL_CAINFO from a single detected CA path.
  - Improve logging and section comments in the entrypoint script.

https://chatgpt.com/share/69387df8-bda0-800f-a053-aa9e2999dc84
2025-12-09 20:52:07 +01:00
Kevin Veen-Birkenbach
9357c4632e Release version 0.7.7 2025-12-09 17:54:41 +01:00
Kevin Veen-Birkenbach
ca5d0d22f3 feat(test): make unittest pattern configurable and pass TEST_PATTERN into containers
This update introduces a configurable TEST_PATTERN variable in the Makefile,
allowing selective execution of unit, integration, and E2E tests without
modifying scripts.

Key changes:
- Add TEST_PATTERN (default: test_*.py) to Makefile and export it.
- Inject TEST_PATTERN into all test containers via `-e TEST_PATTERN=...`.
- Update test-unit.sh, test-integration.sh, and test-e2e.sh to use
  `-p "$TEST_PATTERN"` instead of a hardcoded pattern.
- Ensure flexible test selection via:
      make test-e2e TEST_PATTERN=test_install_pkgmgr_shallow.py

This enables fast debugging, selective test runs, and better developer
experience while keeping full compatibility with CI defaults.

https://chatgpt.com/share/69385400-2f14-800f-b093-bb03c8ef9c7f
2025-12-09 17:53:10 +01:00
Kevin Veen-Birkenbach
3875338fb7 Release version 0.7.6 2025-12-09 17:14:22 +01:00
Kevin Veen-Birkenbach
196f55c58e feat(repository/pull): improve verification logic and add full unit test suite
This commit enhances the behaviour of pull_with_verification() and adds a
comprehensive unit test suite covering all control flows.

Changes:
- Added `preview` parameter to fully disable interaction and execution.
- Improved verification logic:
  * Prompt only when not in preview, verification is enabled,
    verification info exists, and verification failed.
  * Skip prompts entirely when --no-verification is set.
- More explicit construction of `git pull` command with optional extra args.
- Improved messaging and formatting for clarity.
- Ensured directory existence is checked before any verification logic.
- Added detailed comments explaining logic and conditions.

Tests:
- New file tests/unit/pkgmgr/actions/repos/test_pull_with_verification.py
- Covers:
  * Preview mode (no input, no subprocess)
  * Verification failure – user rejects
  * Verification failure – user accepts
  * Verification success – immediate git call
  * Missing repository directory – skip silently
  * --no-verification flag bypasses prompts
  * Command formatting with extra args
- Uses systematic mocking for identifier, repo-dir, verify_repository(),
  subprocess.run(), and user input.

This significantly strengthens correctness, UX, and test coverage of the
repository pull workflow.

https://chatgpt.com/share/69384aaa-0c80-800f-b4b4-64e6fbdebd3b
2025-12-09 17:12:23 +01:00
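A condensed sketch of the control flow described above (directory check, preview short-circuit, verification prompt, explicit git command construction); parameter names are illustrative assumptions.

```python
import os
import subprocess

def pull_with_verification(repo_dir, verified, no_verification=False,
                           preview=False, extra_args=()):
    # Missing repository directory: skip silently.
    if not os.path.isdir(repo_dir):
        return
    cmd = ["git", "-C", repo_dir, "pull", *extra_args]
    if preview:
        print("Would run:", " ".join(cmd))
        return
    # Prompt only when verification is enabled, failed, and not suppressed.
    if not no_verification and not verified:
        if input("Verification failed. Pull anyway? [y/N] ").strip().lower() != "y":
            return
    subprocess.run(cmd, check=True)
```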
Kevin Veen-Birkenbach
9a149715f6 Release version 0.7.5 2025-12-09 16:45:45 +01:00
Kevin Veen-Birkenbach
bf40533469 fix(init-nix): ensure /nix is always owned by nix:nixbld in container root mode
In GitHub's Fedora-based CI containers the directory /nix may already exist
(e.g. from the base image or a previous build layer) and is often owned by
root:root. In this situation the Nix single-user installer aborts with:

    "directory /nix exists, but is not writable by you"

This caused the container build to fail during `init-nix.sh`, leaving no
working `nix` binary on PATH. As a result, the runtime wrapper
(pkgmgr-wrapper.sh) reported:

    "[pkgmgr-wrapper] ERROR: 'nix' binary not found on PATH."

Local runs did not show the issue because a previous installation had already
created /nix with correct ownership.

This commit makes container-mode Nix initialization fully idempotent:

  • If /nix does not exist → create it with owner nix:nixbld (existing logic).
  • If /nix exists but has wrong owner/group → forcibly chown -R nix:nixbld.
  • A warning is emitted if /nix remains non-writable after correction.

This guarantees that the Nix installer always has writable access to /nix
and prevents the installer from aborting in CI. As a result, `pkgmgr --help`
works again inside Fedora CI containers.

https://chatgpt.com/share/69384149-9dc8-800f-8148-55817ece8e21
2025-12-09 16:33:22 +01:00
Kevin Veen-Birkenbach
7bc7259988 Release version 0.7.4 2025-12-09 16:22:03 +01:00
Kevin Veen-Birkenbach
66b96ac3a5 Refactor CI workflows and Makefile to unify container builds and simplify test execution
This commit updates all GitHub Actions workflows and the Makefile to ensure
consistent behavior across unit, integration, end-to-end, and OS-container
tests.

Changes include:

CI Workflows:
  - Rename workflows for clearer, more professional naming:
        * "Test Distribution Containers" → "Test OS Containers"
        * "Test package-manager (e2e)" → "Test End-To-End"
        * "Test package-manager (unit)" → "Test Units"
        * "Test package-manager (integration)" → "Test Code Integration"
  - Remove explicit build steps from workflows; container creation is now
    delegated to the Makefile via build-missing.
  - Restrict test jobs to only build the Arch test container by setting:
        DISTROS="arch"

Makefile:
  - Add build-missing as a dependency to all test targets:
        test-unit, test-integration, test-e2e, test-container
  - Remove redundant build-missing call from the combined 'test' target,
    since Make now ensures build-missing runs exactly once per invocation.
  - Preserve existing target structure while ensuring container images are
    built automatically on demand.

This makes the CI pipeline faster, more predictable, and removes duplicated
container build logic. All tests now use the same unified mechanism for
building missing images.
2025-12-09 16:18:15 +01:00
Kevin Veen-Birkenbach
f974e0b14a Release version 0.7.3 2025-12-09 16:08:34 +01:00
Kevin Veen-Birkenbach
de8c3f768d feat(repository): integrate ignore filtering into selection pipeline + add unit tests
This commit introduces proper handling of the `ignore: true` flag in the
repository selection mechanism and adds comprehensive unit tests for both
`ignored.py` and `selected.py`.

- `get_selected_repos()` now filters ignored repositories in all implicit
  selection modes:
    • filter-only mode (string/category/tag)
    • `--all` mode
    • CWD-based selection

- Explicit identifiers (e.g. `pkgmgr install ignored-repo`) **bypass**
  ignore filtering, so the user can still operate directly on ignored
  repositories if they ask for them explicitly.

- Added `_maybe_filter_ignored()` helper to handle logic cleanly and allow
  future extension (e.g. integrating a CLI flag `--include-ignored`).

Under `tests/unit/pkgmgr/core/repository`:

1. **test_ignored.py**
   • Ensures `filter_ignored()` removes repos with `ignore: true`
   • Ensures empty lists are handled correctly

2. **test_selected.py**
   Comprehensive coverage of the selection logic:
   • Explicit identifiers bypass ignore filtering
   • Filter-only mode excludes ignored repos unless `include_ignored=True`
   • `--all` mode excludes ignored repos unless explicitly overridden
   • CWD-based detection filters ignored repos unless explicitly overridden

Before this change, ignored repositories still appeared in `pkgmgr list`,
`pkgmgr status`, and other commands using `get_selected_repos()`.
This was unintuitive and broke the expected semantics of the `ignore`
attribute.
The new logic ensures ignored repositories are truly invisible unless
explicitly requested.

https://chatgpt.com/share/69383b41-50a0-800f-a2b9-c680cd96d9e9
2025-12-09 16:07:39 +01:00
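A small sketch of the selection semantics described above: explicit identifiers bypass the ignore filter, while implicit selection (--all, filters, CWD) hides repositories marked `ignore: true` unless `include_ignored` is set. The dict shape is an assumption.

```python
def filter_ignored(repos):
    """Drop repositories whose configuration sets ignore: true."""
    return [repo for repo in repos if not repo.get("ignore", False)]

def get_selected_repos(all_repos, explicit_identifiers=None, include_ignored=False):
    # Explicitly named repositories are returned even when ignored.
    if explicit_identifiers:
        return [r for r in all_repos if r["identifier"] in explicit_identifiers]
    # Implicit selection hides ignored repositories by default.
    return all_repos if include_ignored else filter_ignored(all_repos)

repos = [{"identifier": "pkgmgr"}, {"identifier": "legacy-tool", "ignore": True}]
print([r["identifier"] for r in get_selected_repos(repos)])                   # ['pkgmgr']
print([r["identifier"] for r in get_selected_repos(repos, ["legacy-tool"])])  # ['legacy-tool']
```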
Kevin Veen-Birkenbach
05ff250251 Release version 0.7.2 2025-12-09 15:49:01 +01:00
Kevin Veen-Birkenbach
ab52d37467 Refactor release helper into actions package and add RPM changelog support
- Move the monolithic pkgmgr/actions/release.py implementation into the
  pkgmgr.actions.release package, splitting concerns into versioning,
  git_ops and files helpers.
- Extend the release orchestration to update Fedora/RPM %changelog
  entries via update_spec_changelog(), reusing the same effective
  release message as for CHANGELOG.md and debian/changelog.
- Wire the new update_spec_changelog() helper into _release_impl() so
  every release keeps project, Debian and RPM metadata in sync.
- Add unit tests for update_spec_changelog() and for the updated release
  orchestration behaviour in preview and real modes.
- Remove the deprecated pkgmgr/actions/release.py module.

See ChatGPT discussion: https://chatgpt.com/share/6938368e-0940-800f-92d3-f2ccfddab794
2025-12-09 15:47:37 +01:00
Kevin Veen-Birkenbach
80329b85fb Release version 0.7.1 2025-12-09 15:26:56 +01:00
Kevin Veen-Birkenbach
44ff0a6cd9 Release version 0.7.0 2025-12-09 15:21:06 +01:00
Kevin Veen-Birkenbach
e00b1a7b69 Solved import bug 2025-12-09 15:03:31 +01:00
Kevin Veen-Birkenbach
14f0188efd Solved e2e naming bugs 2025-12-09 15:02:04 +01:00
Kevin Veen-Birkenbach
a4efb847ba Cleaned Up tests 2025-12-09 14:33:32 +01:00
Kevin Veen-Birkenbach
d50891dfe5 Refactor: Restructure pkgmgr into actions/, core/, and cli/ (full module breakup)
This commit introduces a large-scale structural refactor of the pkgmgr
codebase. All functionality has been moved from the previous flat
top-level layout into three clearly separated namespaces:

  • pkgmgr.actions      – high-level operations invoked by the CLI
  • pkgmgr.core         – pure logic, helpers, repository utilities,
                          versioning, git helpers, config IO, and
                          command resolution
  • pkgmgr.cli          – parser, dispatch, context, and command
                          handlers

Key improvements:
  - Moved all “branch”, “release”, “changelog”, repo-management
    actions, installer pipelines, and proxy execution logic into
    pkgmgr.actions.<domain>.
  - Reworked installer structure under
        pkgmgr.actions.repository.install.installers
    including OS-package installers, Nix, Python, and Makefile.
  - Consolidated all low-level functionality under pkgmgr.core:
        • git helpers → core/git
        • config load/save → core/config
        • repository helpers → core/repository
        • versioning & semver → core/version
        • command helpers (alias, resolve, run, ink) → core/command
  - Replaced pkgmgr.cli_core with pkgmgr.cli and updated all imports.
  - Added minimal __init__.py files for clean package exposure.
  - Updated all E2E, integration, and unit tests with new module paths.
  - Fixed patch targets so mocks point to the new structure.
  - Ensured backward compatibility at the CLI boundary (pkgmgr entry point unchanged).

This refactor produces a cleaner, layered architecture:
  - `core` = logic
  - `actions` = orchestrated behaviour
  - `cli` = user interface

Reference: ChatGPT-assisted refactor discussion
https://chatgpt.com/share/6938221c-e24c-800f-8317-7732cedf39b9
2025-12-09 14:20:19 +01:00
Kevin Veen-Birkenbach
59d0355b91 Release version 0.6.0 2025-12-09 05:59:58 +01:00
Kevin Veen-Birkenbach
da9d5cfa6b Fix container tests, unify RPM install path, and ensure Nix TLS truststore detection
Changes included:
• GitHub Actions workflow: rename job from 'test-unit' to 'test-container' to match intent.
• RPM packaging: replace %{_libdir}/package-manager with a fixed /usr/lib/package-manager
  to avoid lib/lib64 divergence on CentOS and ensure pkgmgr + Nix flake resolution works
  consistently across distros.
• Docker entrypoint: add automatic CA-bundle detection and set NIX_SSL_CERT_FILE to fix
  TLS issues on CentOS ('unable to get local issuer certificate') when Nix fetches flake
  inputs.

These updates stabilize container-based tests and unify the runtime environment
for Fedora, CentOS, and other distributions.

Reference:
ChatGPT conversation: https://chatgpt.com/share/6937aa72-d33c-800f-a63f-c353e92de6b3
2025-12-09 05:50:08 +01:00
Kevin Veen-Birkenbach
f9943fafae Refactor container build and installation pipeline to use configurable Makefile parameters (e.g. DISTROS, base images) and propagate them through all build, install, and test scripts 2025-12-09 05:31:55 +01:00
Kevin Veen-Birkenbach
7d73007181 Release version 0.5.1 2025-12-09 01:21:31 +01:00
Kevin Veen-Birkenbach
c8462fefa4 Release version 0.5.0 2025-12-09 00:44:16 +01:00
Kevin Veen-Birkenbach
00a1f373ce Merge branch 'feature/config_v2.0' 2025-12-09 00:29:19 +01:00
Kevin Veen-Birkenbach
9f9f2e68c0 Release version 0.4.3 2025-12-09 00:29:08 +01:00
Kevin Veen-Birkenbach
d25dcb05e4 Merge branch 'feature/branch_close' 2025-12-09 00:03:56 +01:00
Kevin Veen-Birkenbach
e135d39710 Release version 0.4.2 2025-12-09 00:03:46 +01:00
Kevin Veen-Birkenbach
76b7f84989 Release version 0.4.1 2025-12-08 23:20:28 +01:00
Kevin Veen-Birkenbach
8ea7ff23e9 Release version 0.3.0 2025-12-08 22:40:50 +01:00
202 changed files with 10588 additions and 4607 deletions

.github/workflows/test-container.yml

@@ -0,0 +1,25 @@
name: Test OS Containers
on:
push:
branches:
- main
- master
- develop
- "*"
pull_request:
jobs:
test-container:
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Show Docker version
run: docker version
- name: Run container tests
run: make test-container


@@ -1,4 +1,4 @@
name: Test package-manager (e2e)
name: Test End-To-End
on:
push:


@@ -1,4 +1,4 @@
name: Test package-manager (integration)
name: Test Code Integration
on:
push:
@@ -21,9 +21,5 @@ jobs:
- name: Show Docker version
run: docker version
# Build Arch test image (same as used in test-unit and test-e2e)
- name: Build test images
run: make build
- name: Run integration tests via make (Arch container)
run: make test-integration
run: make test-integration DISTROS="arch"


@@ -1,4 +1,4 @@
name: Test package-manager (unit)
name: Test Units
on:
push:
@@ -22,4 +22,4 @@ jobs:
run: docker version
- name: Run unit tests via make (Arch container)
run: make test-unit
run: make test-unit DISTROS="arch"

.github/workflows/test-virgin-root.yml

@@ -0,0 +1,64 @@
name: Test Virgin Root
on:
push:
branches:
- main
- master
- develop
- "*"
pull_request:
jobs:
test-virgin-root:
runs-on: ubuntu-latest
timeout-minutes: 45
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Show Docker version
run: docker version
- name: Virgin Arch pkgmgr flake test (root)
run: |
set -euo pipefail
echo ">>> Starting virgin ArchLinux container test (root, with shared caches)..."
docker run --rm \
-v "$PWD":/src \
-v pkgmgr_repos:/root/Repositories \
-v pkgmgr_pip_cache:/root/.cache/pip \
-w /src \
archlinux:latest \
bash -lc '
set -euo pipefail
echo ">>> Updating and upgrading Arch system..."
pacman -Syu --noconfirm git python python-pip nix >/dev/null
echo ">>> Creating isolated virtual environment for pkgmgr..."
python -m venv /tmp/pkgmgr-venv
echo ">>> Activating virtual environment..."
source /tmp/pkgmgr-venv/bin/activate
echo ">>> Upgrading pip (cached)..."
python -m pip install --upgrade pip >/dev/null
echo ">>> Installing pkgmgr from current source tree (cached pip)..."
python -m pip install /src >/dev/null
echo ">>> Enabling Nix experimental features..."
export NIX_CONFIG="experimental-features = nix-command flakes"
echo ">>> Running: pkgmgr update pkgmgr --clone-mode shallow --no-verification"
pkgmgr update pkgmgr --clone-mode shallow --no-verification
echo ">>> Running: pkgmgr version pkgmgr"
pkgmgr version pkgmgr
echo ">>> Virgin Arch (root) test completed successfully."
'

.github/workflows/test-virgin-user.yml

@@ -0,0 +1,79 @@
name: Test Virgin User
on:
push:
branches:
- main
- master
- develop
- "*"
pull_request:
jobs:
test-virgin-user:
runs-on: ubuntu-latest
timeout-minutes: 45
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Show Docker version
run: docker version
- name: Virgin Arch pkgmgr user test (non-root with sudo)
run: |
set -euo pipefail
echo ">>> Starting virgin ArchLinux container test (non-root user with sudo)..."
docker run --rm \
-v "$PWD":/src \
archlinux:latest \
bash -lc '
set -euo pipefail
echo ">>> [root] Updating and upgrading Arch system..."
pacman -Syu --noconfirm git python python-pip sudo base-devel debugedit
echo ">>> [root] Creating non-root user dev..."
useradd -m dev
echo ">>> [root] Allowing passwordless sudo for dev..."
echo "dev ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/dev
chmod 0440 /etc/sudoers.d/dev
echo ">>> [root] Adjusting ownership of /src for dev..."
chown -R dev:dev /src
echo ">>> [root] Running pkgmgr flow as non-root user dev..."
sudo -u dev env PKGMGR_DISABLE_NIX_FLAKE_INSTALLER=1 bash -lc "
set -euo pipefail
cd /src
echo \">>> [dev] Using user: \$(whoami)\"
echo \">>> [dev] Running scripts/installation/main.sh...\"
bash scripts/installation/main.sh
echo \">>> [dev] Activating venv...\"
. \"\$HOME/.venvs/pkgmgr/bin/activate\"
echo \">>> [dev] Installing pkgmgr into venv via pip...\"
python -m pip install /src >/dev/null
echo \">>> [dev] PKGMGR_DISABLE_NIX_FLAKE_INSTALLER=\$PKGMGR_DISABLE_NIX_FLAKE_INSTALLER\"
echo \">>> [dev] Updating managed repo package-manager via pkgmgr...\"
pkgmgr update pkgmgr --clone-mode shallow --no-verification
echo \">>> [dev] PATH:\"
echo \"\$PATH\"
echo \">>> [dev] which pkgmgr:\"
which pkgmgr || echo \">>> [dev] pkgmgr not found in PATH\"
echo \">>> [dev] Running: pkgmgr version pkgmgr\"
pkgmgr version pkgmgr
"
echo ">>> [root] Container flow finished."
'

.gitignore

@@ -1,9 +1,6 @@
# Prevents unwanted files from being committed to version control.
# Custom Config file
config/config.yaml
# Python bytecode
__pycache__/
*.pyc
@@ -15,8 +12,18 @@ venv/
# Build artifacts
dist/
build/
build/*
*.egg-info/
pkg
src/source
package-manager-*
# debian
debian/package-manager/
debian/debhelper-build-stamp
debian/files
debian/.debhelper/
debian/package-manager.substvars
# Editor files
.vscode/
@@ -31,4 +38,3 @@ Thumbs.db
# Ignore logs
*.log
package-manager-*


@@ -1,7 +1,153 @@
## [0.9.1] - 2025-12-10
* Refactored installer: new `venv-create.sh`, cleaner root/user setup flow, updated README with architecture map.
* Split virgin tests into root/user workflows; stabilized Nix installer across distros; improved test scripts with dynamic distro selection and isolated Nix stores.
* Fixed repository directory resolution; improved `pkgmgr path` and `pkgmgr shell`; added full unit/E2E coverage.
* Removed deprecated files and updated `.gitignore`.
## [0.9.0] - 2025-12-10
* Introduce a virgin Arch-based Nix flake E2E workflow that validates pkgmgr's full flake installation path using shared caches for faster and reproducible CI runs.
## [0.8.0] - 2025-12-10
* **v0.7.15 — Installer & Command Resolution Improvements**
* Introduced a unified **layer-based installer pipeline** with clear precedence (OS-packages, Nix, Python, Makefile).
* Reworked installer structure and improved Python/Nix/Makefile installers, including isolated Python venvs and refined flake-output handling.
* Fully rewrote **command resolution** with stronger typing, safer fallbacks, and explicit support for `command: null` to mark library-only repositories.
* Added extensive **unit and integration tests** for installer capability ordering, command resolution, and Nix/Python installer behavior.
* Expanded documentation with capability hierarchy diagrams and scenario matrices.
* Removed deprecated repository entries and obsolete configuration files.
## [0.7.14] - 2025-12-10
* Fixed the clone-all integration test so that `SystemExit(0)` from the proxy is treated as a successful command instead of a failure.
## [0.7.13] - 2025-12-10
### Fix tools path resolution and add tests
- Fixed a crash in `pkgmgr code` caused by missing `directory` metadata by introducing `_resolve_repository_path()` with proper fallbacks to `repositories_base_dir` / `repositories_dir`.
- Updated `explore`, `terminal` and `code` tool commands to use the new resolver.
- Improved VS Code workspace generation and path handling.
- Added unit & E2E tests for tool commands.
## [0.7.12] - 2025-12-09
* Fixed self-referential alias creation during setup
## [0.7.11] - 2025-12-09
* test: fix installer unit tests for OS packages and Nix dev shell
## [0.7.10] - 2025-12-09
* Fixed test_install_pkgmgr_shallow.py
## [0.7.9] - 2025-12-09
* 'main' and 'master' are now both accepted as base branches for the branch close merge
## [0.7.8] - 2025-12-09
* A missing pyproject.toml no longer leads to an error during release
## [0.7.7] - 2025-12-09
* Added TEST_PATTERN parameter to execute dedicated tests
## [0.7.6] - 2025-12-09
* Fixed pull --preview bug in e2e test
## [0.7.5] - 2025-12-09
* Fixed wrong directory permissions for nix
## [0.7.4] - 2025-12-09
* Fixed missing build step in the test workflow; tests pass now
## [0.7.3] - 2025-12-09
* Fixed bug: ignored repositories are now actually excluded from implicit selection
## [0.7.2] - 2025-12-09
* Implemented Changelog Support for Fedora and Debian
## [0.7.1] - 2025-12-09
* Fix floating 'latest' tag logic: dereference annotated target (vX.Y.Z^{}), add tag message to avoid Git errors, ensure best-effort update without blocking releases, and update unit tests (see ChatGPT conversation: https://chatgpt.com/share/69383024-efa4-800f-a875-129b81fa40ff).
## [0.7.0] - 2025-12-09
* Add Git helpers for branch sync and floating 'latest' tag in the release workflow, ensure main/master are updated from origin before tagging, and extend unit/e2e tests including 'pkgmgr release --help' coverage (see ChatGPT conversation: https://chatgpt.com/share/69383024-efa4-800f-a875-129b81fa40ff)
## [0.6.0] - 2025-12-09
* Expose DISTROS and BASE_IMAGE_* variables as exported Makefile environment variables so all build and test commands can consume them dynamically. By exporting these values, every Make target (e.g., build, build-no-cache, build-missing, test-container, test-unit, test-e2e) and every delegated script in scripts/build/ and scripts/test/ now receives a consistent view of the supported distributions and their base container images. This change removes duplicated definitions across scripts, ensures reproducible builds, and allows build tooling to react automatically when new distros or base images are added to the Makefile.
## [0.5.1] - 2025-12-09
* Refine pkgmgr release CLI close wiring and integration tests for --close flag (ChatGPT: https://chatgpt.com/share/69376b4e-8440-800f-9d06-535ec1d7a40e)
## [0.5.0] - 2025-12-09
* Add pkgmgr branch close subcommand, extend CLI parser wiring, and add unit tests for branch handling and version version-selection logic (see ChatGPT conversation: https://chatgpt.com/share/693762a3-9ea8-800f-a640-bc78170953d1)
## [0.4.3] - 2025-12-09
* Implement current-directory repository selection for release and proxy commands, unify selection semantics across CLI layers, extend release workflow with --close, integrate branch closing logic, fix wiring for get_repo_identifier/get_repo_dir, update packaging files (PKGBUILD, spec, flake.nix, pyproject), and add comprehensive unit/e2e tests for release and branch commands (see ChatGPT conversation: https://chatgpt.com/share/69375cfe-9e00-800f-bd65-1bd5937e1696)
## [0.4.2] - 2025-12-09
* Wire pkgmgr release CLI to new helper and add unit tests (see ChatGPT conversation: https://chatgpt.com/share/69374f09-c760-800f-92e4-5b44a4510b62)
## [0.4.1] - 2025-12-08
* Add branch close subcommand and integrate release close/editor flow (ChatGPT: https://chatgpt.com/share/69374f09-c760-800f-92e4-5b44a4510b62)
## [0.4.0] - 2025-12-08
* Add branch closing helper and --close flag to release command, including CLI wiring and tests (see https://chatgpt.com/share/69374aec-74ec-800f-bde3-5d91dfdb9b91)
## [0.3.0] - 2025-12-08
* Massive refactor and feature expansion:
- Complete rewrite of config loading system (layered defaults + user config)
- New selection engine (--string, --category, --tag)
- Overhauled list output (colored statuses, alias highlight)
- New config update logic + default YAML sync
- Improved proxy command handling
- Full CLI routing refactor
- Expanded E2E tests for list, proxy, and selection logic
Conversation: https://chatgpt.com/share/693745c3-b8d8-800f-aa29-c8481a2ffae1
## [0.2.0] - 2025-12-08


@@ -4,87 +4,6 @@
ARG BASE_IMAGE=archlinux:latest
FROM ${BASE_IMAGE}
# ------------------------------------------------------------
# System base + conditional package tool installation
#
# Important:
# - We do NOT install Nix directly here via curl.
# - Nix is installed/initialized by init-nix.sh, which is invoked
# from the system packaging hooks (Arch .install, Debian postinst,
# RPM %post).
# ------------------------------------------------------------
RUN set -e; \
if [ -f /etc/os-release ]; then . /etc/os-release; else echo "No /etc/os-release found" && exit 1; fi; \
echo "Detected base image: ${ID:-unknown} (like: ${ID_LIKE:-})"; \
\
if [ "$ID" = "arch" ]; then \
pacman -Syu --noconfirm && \
pacman -S --noconfirm --needed \
base-devel \
git \
rsync \
curl \
ca-certificates \
xz && \
pacman -Scc --noconfirm; \
elif [ "$ID" = "debian" ]; then \
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
build-essential \
debhelper \
dpkg-dev \
git \
rsync \
bash \
curl \
ca-certificates \
xz-utils && \
rm -rf /var/lib/apt/lists/*; \
elif [ "$ID" = "ubuntu" ]; then \
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
build-essential \
debhelper \
dpkg-dev \
git \
tzdata \
lsb-release \
rsync \
bash \
curl \
ca-certificates \
xz-utils && \
rm -rf /var/lib/apt/lists/*; \
elif [ "$ID" = "fedora" ]; then \
dnf -y update && \
dnf -y install \
git \
rsync \
rpm-build \
make \
gcc \
bash \
curl \
ca-certificates \
xz && \
dnf clean all; \
elif [ "$ID" = "centos" ]; then \
dnf -y update && \
dnf -y install \
git \
rsync \
rpm-build \
make \
gcc \
bash \
curl-minimal \
ca-certificates \
xz && \
dnf clean all; \
else \
echo "Unsupported base image: ${ID}" && exit 1; \
fi
# ------------------------------------------------------------
# Nix environment defaults
#
@@ -96,94 +15,38 @@ ENV NIX_CONFIG="experimental-features = nix-command flakes"
# ------------------------------------------------------------
# Unprivileged user for Arch package build (makepkg)
# ------------------------------------------------------------
RUN useradd -m builder || true
RUN useradd -m aur_builder || true
# ------------------------------------------------------------
# Copy scripts and install distro dependencies
# ------------------------------------------------------------
WORKDIR /build
# Copy only scripts first so dependency installation can run early
COPY scripts/ scripts/
RUN find scripts -type f -name '*.sh' -exec chmod +x {} \;
# Install distro-specific build dependencies (and AUR builder on Arch)
RUN scripts/installation/run-dependencies.sh
# ------------------------------------------------------------
# Select distro-specific Docker entrypoint
# ------------------------------------------------------------
# Docker entrypoint (distro-agnostic, uses run-package.sh)
# ------------------------------------------------------------
COPY scripts/docker/entry.sh /usr/local/bin/docker-entry.sh
RUN chmod +x /usr/local/bin/docker-entry.sh
# ------------------------------------------------------------
# Build and install distro-native package-manager package
#
# - Arch: PKGBUILD -> pacman -U
# - Debian: debhelper -> dpkg-buildpackage -> apt install ./package-manager_*.deb
# - Ubuntu: same as Debian
# - Fedora: rpmbuild -> dnf/dnf5/yum install package-manager-*.rpm
# - CentOS: rpmbuild -> dnf/yum install package-manager-*.rpm
#
# Nix is NOT manually installed here; it is handled by init-nix.sh.
# via Makefile `install` target (calls scripts/installation/run-package.sh)
# ------------------------------------------------------------
WORKDIR /build
COPY . .
RUN find scripts -type f -name '*.sh' -exec chmod +x {} \;
RUN set -e; \
. /etc/os-release; \
if [ "$ID" = "arch" ]; then \
echo 'Building Arch package (makepkg --nodeps)...'; \
chown -R builder:builder /build; \
su builder -c "cd /build && rm -f package-manager-*.pkg.tar.* && makepkg --noconfirm --clean --nodeps"; \
\
echo 'Installing generated Arch package...'; \
pacman -U --noconfirm package-manager-*.pkg.tar.*; \
elif [ "$ID" = "debian" ] || [ "$ID" = "ubuntu" ]; then \
echo 'Building Debian/Ubuntu package...'; \
dpkg-buildpackage -us -uc -b; \
\
echo 'Installing generated DEB package...'; \
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y ./../package-manager_*.deb && \
rm -rf /var/lib/apt/lists/*; \
elif [ "$ID" = "fedora" ] || [ "$ID" = "centos" ]; then \
echo 'Setting up rpmbuild dirs...'; \
mkdir -p /root/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}; \
\
echo "Extracting version from package-manager.spec..."; \
version=$(grep -E '^Version:' /build/package-manager.spec | awk '{print $2}'); \
if [ -z "$version" ]; then echo 'ERROR: Version missing!' && exit 1; fi; \
srcdir="package-manager-${version}"; \
\
echo "Preparing source tree for RPM: $srcdir"; \
rm -rf "/tmp/$srcdir"; \
mkdir -p "/tmp/$srcdir"; \
cp -a /build/. "/tmp/$srcdir/"; \
\
echo "Creating source tarball: /root/rpmbuild/SOURCES/$srcdir.tar.gz"; \
tar czf "/root/rpmbuild/SOURCES/$srcdir.tar.gz" -C /tmp "$srcdir"; \
\
echo 'Copying SPEC...'; \
cp /build/package-manager.spec /root/rpmbuild/SPECS/; \
\
echo 'Running rpmbuild...'; \
cd /root/rpmbuild/SPECS && rpmbuild -bb package-manager.spec; \
\
echo 'Installing generated RPM (local, offline)...'; \
rpm_path=$(find /root/rpmbuild/RPMS -name "package-manager-*.rpm" | head -n1); \
if [ -z "$rpm_path" ]; then echo 'ERROR: RPM not found!' && exit 1; fi; \
\
if command -v dnf5 >/dev/null 2>&1; then \
echo 'Using dnf5 to install local RPM (no remote repos)...'; \
if ! dnf5 install -y --disablerepo='*' "$rpm_path"; then \
echo 'dnf5 failed, falling back to rpm -i --nodeps'; \
rpm -i --nodeps "$rpm_path"; \
fi; \
elif command -v dnf >/dev/null 2>&1; then \
echo 'Using dnf to install local RPM (no remote repos)...'; \
if ! dnf install -y --disablerepo='*' "$rpm_path"; then \
echo 'dnf failed, falling back to rpm -i --nodeps'; \
rpm -i --nodeps "$rpm_path"; \
fi; \
elif command -v yum >/dev/null 2>&1; then \
echo 'Using yum to install local RPM (no remote repos)...'; \
if ! yum localinstall -y --disablerepo='*' "$rpm_path"; then \
echo 'yum failed, falling back to rpm -i --nodeps'; \
rpm -i --nodeps "$rpm_path"; \
fi; \
else \
echo 'No dnf/dnf5/yum found, falling back to rpm -i --nodeps...'; \
rpm -i --nodeps "$rpm_path"; \
fi; \
\
rm -rf "/tmp/$srcdir"; \
else \
echo "Unsupported distro: ${ID}"; \
exit 1; \
fi; \
echo "Building and installing package-manager via make install..."; \
make install; \
rm -rf /build
# ------------------------------------------------------------
@@ -191,8 +54,5 @@ RUN set -e; \
# ------------------------------------------------------------
WORKDIR /src
COPY scripts/docker-entry-dev.sh /usr/local/bin/docker-entry-dev.sh
RUN chmod +x /usr/local/bin/docker-entry-dev.sh
ENTRYPOINT ["/usr/local/bin/docker-entry-dev.sh"]
CMD ["--help"]
ENTRYPOINT ["/usr/local/bin/docker-entry.sh"]
CMD ["pkgmgr", "--help"]

Makefile

@@ -1,275 +1,79 @@
.PHONY: install setup uninstall aur_builder_setup \
test build build-no-cache test-unit test-e2e test-integration
# ------------------------------------------------------------
# Local Nix cache directories in the repo
# ------------------------------------------------------------
NIX_STORE_VOLUME := pkgmgr_nix_store
NIX_CACHE_VOLUME := pkgmgr_nix_cache
.PHONY: install setup uninstall \
test build build-no-cache test-unit test-e2e test-integration \
test-container
# ------------------------------------------------------------
# Distro list and base images
# (kept for documentation/reference; actual build logic is in scripts/build)
# ------------------------------------------------------------
DISTROS := arch debian ubuntu fedora centos
DISTROS := arch debian ubuntu fedora centos
BASE_IMAGE_ARCH := archlinux:latest
BASE_IMAGE_DEBIAN := debian:stable-slim
BASE_IMAGE_UBUNTU := ubuntu:latest
BASE_IMAGE_FEDORA := fedora:latest
BASE_IMAGE_CENTOS := quay.io/centos/centos:stream9
BASE_IMAGE_arch := archlinux:latest
BASE_IMAGE_debian := debian:stable-slim
BASE_IMAGE_ubuntu := ubuntu:latest
BASE_IMAGE_fedora := fedora:latest
BASE_IMAGE_centos := quay.io/centos/centos:stream9
# Make them available in scripts
export DISTROS
export BASE_IMAGE_ARCH
export BASE_IMAGE_DEBIAN
export BASE_IMAGE_UBUNTU
export BASE_IMAGE_FEDORA
export BASE_IMAGE_CENTOS
# Helper to echo which image is used for which distro (purely informational)
define echo_build_info
@echo "Building image for distro '$(1)' with base image '$(2)'..."
endef
# Python unittest pattern
TEST_PATTERN := test_*.py
export TEST_PATTERN
# ------------------------------------------------------------
# PKGMGR setup (wrapper)
# PKGMGR setup (developer wrapper -> scripts/installation/main.sh)
# ------------------------------------------------------------
setup: install
@echo "Running pkgmgr setup via main.py..."
@if [ -x "$$HOME/.venvs/pkgmgr/bin/python" ]; then \
echo "Using virtualenv Python at $$HOME/.venvs/pkgmgr/bin/python"; \
"$$HOME/.venvs/pkgmgr/bin/python" main.py install; \
else \
echo "Virtualenv not found, falling back to system python3"; \
python3 main.py install; \
fi
setup:
@bash scripts/installation/main.sh
# ------------------------------------------------------------
# Docker build targets: build all images
# Docker build targets (delegated to scripts/build)
# ------------------------------------------------------------
build-no-cache:
@for distro in $(DISTROS); do \
case "$$distro" in \
arch) base_image="$(BASE_IMAGE_arch)" ;; \
debian) base_image="$(BASE_IMAGE_debian)" ;; \
ubuntu) base_image="$(BASE_IMAGE_ubuntu)" ;; \
fedora) base_image="$(BASE_IMAGE_fedora)" ;; \
centos) base_image="$(BASE_IMAGE_centos)" ;; \
*) echo "Unknown distro '$$distro'" >&2; exit 1 ;; \
esac; \
echo "Building test image 'package-manager-test-$$distro' with no cache (BASE_IMAGE=$$base_image)..."; \
docker build --no-cache \
--build-arg BASE_IMAGE="$$base_image" \
-t "package-manager-test-$$distro" . || exit $$?; \
done
@bash scripts/build/build-image-no-cache.sh
build:
@for distro in $(DISTROS); do \
case "$$distro" in \
arch) base_image="$(BASE_IMAGE_arch)" ;; \
debian) base_image="$(BASE_IMAGE_debian)" ;; \
ubuntu) base_image="$(BASE_IMAGE_ubuntu)" ;; \
fedora) base_image="$(BASE_IMAGE_fedora)" ;; \
centos) base_image="$(BASE_IMAGE_centos)" ;; \
*) echo "Unknown distro '$$distro'" >&2; exit 1 ;; \
esac; \
echo "Building test image 'package-manager-test-$$distro' (BASE_IMAGE=$$base_image)..."; \
docker build \
--build-arg BASE_IMAGE="$$base_image" \
-t "package-manager-test-$$distro" . || exit $$?; \
done
build-arch:
@base_image="$(BASE_IMAGE_arch)"; \
echo "Building test image 'package-manager-test-arch' (BASE_IMAGE=$$base_image)..."; \
docker build \
--build-arg BASE_IMAGE="$$base_image" \
-t "package-manager-test-arch" . || exit $$?;
@bash scripts/build/build-image.sh
# ------------------------------------------------------------
# Test targets
# Test targets (delegated to scripts/test)
# ------------------------------------------------------------
# Unit tests: only in Arch container (fastest feedback), via Nix devShell
test-unit: build-arch
@echo "============================================================"
@echo ">>> Running UNIT tests in Arch container (via Nix devShell)"
@echo "============================================================"
docker run --rm \
-v "$$(pwd):/src" \
--workdir /src \
--entrypoint bash \
"package-manager-test-arch" \
-c '\
set -e; \
if [ -f /etc/os-release ]; then . /etc/os-release; fi; \
echo "Detected container distro: $${ID:-unknown} (like: $${ID_LIKE:-})"; \
echo "Running Python unit tests (tests/unit) via nix develop..."; \
git config --global --add safe.directory /src || true; \
cd /src; \
nix develop .#default --no-write-lock-file -c \
python -m unittest discover \
-s tests/unit \
-t /src \
-p "test_*.py"; \
'
test-unit: build-missing
@bash scripts/test/test-unit.sh
# Integration tests: also in Arch container, via Nix devShell
test-integration: build-arch
@echo "============================================================"
@echo ">>> Running INTEGRATION tests in Arch container (via Nix devShell)"
@echo "============================================================"
docker run --rm \
-v "$$(pwd):/src" \
--workdir /src \
--entrypoint bash \
"package-manager-test-arch" \
-c '\
set -e; \
if [ -f /etc/os-release ]; then . /etc/os-release; fi; \
echo "Detected container distro: $${ID:-unknown} (like: $${ID_LIKE:-})"; \
echo "Running Python integration tests (tests/integration) via nix develop..."; \
git config --global --add safe.directory /src || true; \
cd /src; \
nix develop .#default --no-write-lock-file -c \
python -m unittest discover \
-s tests/integration \
-t /src \
-p "test_*.py"; \
'
test-integration: build-missing
@bash scripts/test/test-integration.sh
# End-to-end tests: run in all distros via Nix devShell (tests/e2e)
test-e2e: build
@echo "Ensuring Docker Nix volumes exist (auto-created if missing)..."
@echo "Running E2E tests inside Nix devShell with cached store for all distros: $(DISTROS)"
test-e2e: build-missing
@bash scripts/test/test-e2e.sh
@for distro in $(DISTROS); do \
echo "============================================================"; \
echo ">>> Running E2E tests in container for distro: $$distro"; \
echo "============================================================"; \
# Only for Arch: mount /nix as volume, for others use image-installed Nix \
if [ "$$distro" = "arch" ]; then \
NIX_STORE_MOUNT="-v $(NIX_STORE_VOLUME):/nix"; \
else \
NIX_STORE_MOUNT=""; \
fi; \
docker run --rm \
-v "$$(pwd):/src" \
$$NIX_STORE_MOUNT \
-v "$(NIX_CACHE_VOLUME):/root/.cache/nix" \
--workdir /src \
--entrypoint bash \
"package-manager-test-$$distro" \
-c '\
set -e; \
if [ -f /etc/os-release ]; then . /etc/os-release; fi; \
echo "Detected container distro: $${ID:-unknown} (like: $${ID_LIKE:-})"; \
echo "Preparing Nix environment..."; \
if [ -f "/nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh" ]; then \
. "/nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh"; \
fi; \
if [ -f "$$HOME/.nix-profile/etc/profile.d/nix.sh" ]; then \
. "$$HOME/.nix-profile/etc/profile.d/nix.sh"; \
fi; \
PATH="/nix/var/nix/profiles/default/bin:$$HOME/.nix-profile/bin:$$PATH"; \
export PATH; \
echo "PATH is now:"; \
echo "$$PATH"; \
NIX_CMD=""; \
if command -v nix >/dev/null 2>&1; then \
echo "Found nix on PATH:"; \
command -v nix; \
NIX_CMD="nix"; \
else \
echo "nix not found on PATH, scanning /nix/store for a nix binary..."; \
for path in /nix/store/*-nix-*/bin/nix; do \
if [ -x "$$path" ]; then \
echo "Found nix binary at $$path"; \
NIX_CMD="$$path"; \
break; \
fi; \
done; \
fi; \
if [ -z "$$NIX_CMD" ]; then \
echo "ERROR: nix binary not found anywhere cannot run devShell"; \
exit 1; \
fi; \
echo "Using Nix command: $$NIX_CMD"; \
echo "Run E2E tests inside Nix devShell (tests/e2e)..."; \
git config --global --add safe.directory /src || true; \
cd /src; \
"$$NIX_CMD" develop .#default --no-write-lock-file -c \
python3 -m unittest discover \
-s /src/tests/e2e \
-p "test_*.py"; \
' || exit $$?; \
done
# Combined test target for local + CI (unit + e2e + integration)
test: build test-unit test-e2e test-integration
test-container: build-missing
@bash scripts/test/test-container.sh
# ------------------------------------------------------------
# Installer for host systems (original logic)
# Build only missing container images
# ------------------------------------------------------------
build-missing:
@bash scripts/build/build-image-missing.sh
# Combined test target for local + CI (unit + integration + e2e)
test: test-container test-unit test-integration test-e2e
# ------------------------------------------------------------
# System install (native packages, calls scripts/installation/run-package.sh)
# ------------------------------------------------------------
install:
@if [ -n "$$IN_NIX_SHELL" ]; then \
echo "Nix shell detected (IN_NIX_SHELL=1). Skipping venv/pip install handled by Nix flake."; \
else \
echo "Making 'main.py' executable..."; \
chmod +x main.py; \
echo "Checking if global user virtual environment exists..."; \
mkdir -p "$$HOME/.venvs"; \
if [ ! -d "$$HOME/.venvs/pkgmgr" ]; then \
echo "Creating global venv at $$HOME/.venvs/pkgmgr..."; \
python3 -m venv "$$HOME/.venvs/pkgmgr"; \
fi; \
echo "Installing required Python packages into $$HOME/.venvs/pkgmgr..."; \
"$$HOME/.venvs/pkgmgr/bin/python" -m ensurepip --upgrade; \
"$$HOME/.venvs/pkgmgr/bin/pip" install --upgrade pip setuptools wheel; \
echo "Looking for requirements.txt / _requirements.txt..."; \
if [ -f requirements.txt ]; then \
echo "Installing Python packages from requirements.txt..."; \
"$$HOME/.venvs/pkgmgr/bin/pip" install -r requirements.txt; \
elif [ -f _requirements.txt ]; then \
echo "Installing Python packages from _requirements.txt..."; \
"$$HOME/.venvs/pkgmgr/bin/pip" install -r _requirements.txt; \
else \
echo "No requirements.txt or _requirements.txt found, skipping dependency installation."; \
fi; \
echo "Ensuring $$HOME/.bashrc and $$HOME/.zshrc exist..."; \
touch "$$HOME/.bashrc" "$$HOME/.zshrc"; \
echo "Ensuring automatic activation of $$HOME/.venvs/pkgmgr for this user..."; \
for rc in "$$HOME/.bashrc" "$$HOME/.zshrc"; do \
rc_line='if [ -d "$${HOME}/.venvs/pkgmgr" ]; then . "$${HOME}/.venvs/pkgmgr/bin/activate"; if [ -n "$${PS1:-}" ]; then echo "Global Python virtual environment '\''~/.venvs/pkgmgr'\'' activated."; fi; fi'; \
grep -qxF "$${rc_line}" "$$rc" || echo "$${rc_line}" >> "$$rc"; \
done; \
echo "Arch/Manjaro detection and optional AUR setup..."; \
if command -v pacman >/dev/null 2>&1; then \
$(MAKE) aur_builder_setup; \
else \
echo "Not Arch-based (no pacman). Skipping aur_builder/yay setup."; \
fi; \
echo "Installation complete. Please restart your shell (or 'exec bash' or 'exec zsh') for the changes to take effect."; \
fi
# ------------------------------------------------------------
# AUR builder setup — only on Arch/Manjaro
# ------------------------------------------------------------
aur_builder_setup:
@echo "Setting up aur_builder and yay (Arch/Manjaro)..."
@sudo pacman -Syu --noconfirm
@sudo pacman -S --needed --noconfirm base-devel git sudo
@if ! getent group aur_builder >/dev/null; then sudo groupadd -r aur_builder; fi
@if ! id -u aur_builder >/dev/null 2>&1; then sudo useradd -m -r -g aur_builder -s /bin/bash aur_builder; fi
@echo '%aur_builder ALL=(ALL) NOPASSWD: /usr/bin/pacman' | sudo tee /etc/sudoers.d/aur_builder >/dev/null
@sudo chmod 0440 /etc/sudoers.d/aur_builder
@if ! sudo -u aur_builder bash -lc 'command -v yay >/dev/null'; then \
sudo -u aur_builder bash -lc 'cd ~ && rm -rf yay && git clone https://aur.archlinux.org/yay.git && cd yay && makepkg -si --noconfirm'; \
else \
echo "yay already installed."; \
fi
@echo "aur_builder/yay setup complete."
@echo "Building and installing distro-native package-manager for this system..."
@bash scripts/installation/run-package.sh
# ------------------------------------------------------------
# Uninstall target
# ------------------------------------------------------------
uninstall:
@echo "Removing global user virtual environment if it exists..."
@rm -rf "$$HOME/.venvs/pkgmgr"
@echo "Cleaning up $$HOME/.bashrc and $$HOME/.zshrc entries..."
@for rc in "$$HOME/.bashrc" "$$HOME/.zshrc"; do \
sed -i '/\.venvs\/pkgmgr\/bin\/activate"; if \[ -n "\$${PS1:-}" \]; then echo "Global Python virtual environment '\''~\/\.venvs\/pkgmgr'\'' activated."; fi; fi/d' "$$rc"; \
done
@echo "Uninstallation complete. Please restart your shell (or 'exec bash' or 'exec zsh') for the changes to fully apply."
@bash scripts/uninstall.sh
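For readers tracing the `install` target above: once Make's `$$` escaping and the nested quoting are resolved, the line appended to `~/.bashrc` and `~/.zshrc` is equivalent to the following (a sketch of the expanded `rc_line`, reformatted across several lines for readability; the appended line itself is a single line):
```bash
# Appended once per rc file by `make install` (expanded form of rc_line)
if [ -d "${HOME}/.venvs/pkgmgr" ]; then
    . "${HOME}/.venvs/pkgmgr/bin/activate"
    if [ -n "${PS1:-}" ]; then
        echo "Global Python virtual environment '~/.venvs/pkgmgr' activated."
    fi
fi
```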

View File

@@ -1,7 +1,7 @@
# Maintainer: Kevin Veen-Birkenbach <info@veen.world>
pkgname=package-manager
pkgver=0.4.0
pkgver=0.9.1
pkgrel=1
pkgdesc="Local-flake wrapper for Kevin's package-manager (Nix-based)."
arch=('any')

View File

@@ -24,6 +24,15 @@
- **Custom Aliases:**
Generate and manage custom aliases for easy command invocation.
## Architecture & Setup Map 🗺️
The following diagram provides a full overview of PKGMGR's package structure,
installation layers, and setup controller flow:
![PKGMGR Architecture](assets/map.png)
**Diagram status:** *As of 10 December 2025*
**Always-up-to-date version:** https://s.veen.world/pkgmgrmp
## Installation ⚙️
@@ -51,55 +60,6 @@ The `make setup` command will:
- Install required packages from `requirements.txt`.
- Execute `python main.py install` to complete the installation.
## Docker Quickstart 🐳
As an alternative to installing locally, you can use Docker: build the image with
```bash
docker build --no-cache -t pkgmgr .
```
or alternatively pull it via
```bash
docker pull kevinveenbirkenbach/pkgmgr:latest
```
and then run
```bash
docker run --rm pkgmgr --help
```
## Usage 📖
Run the script with different commands. For example:
- **Install all packages:**
```bash
pkgmgr install --all
```
- **Pull updates for a specific repository:**
```bash
pkgmgr pull pkgmgr
```
- **Commit changes with extra Git parameters:**
```bash
pkgmgr commit pkgmgr -- -m "Your commit message"
```
- **List all configured packages:**
```bash
pkgmgr config show
```
- **Manage configuration:**
```bash
pkgmgr config init
pkgmgr config add
pkgmgr config edit
pkgmgr config delete <identifier>
pkgmgr config ignore <identifier> --set true
```
## License 📄
This project is licensed under the MIT License.
@@ -108,9 +68,3 @@ This project is licensed under the MIT License.
Kevin Veen-Birkenbach
[https://www.veen.world](https://www.veen.world)
---
**Repository:** [github.com/kevinveenbirkenbach/package-manager](https://github.com/kevinveenbirkenbach/package-manager)
*Created with AI 🤖 - [View conversation](https://chatgpt.com/share/67c728c4-92d0-800f-8945-003fa9bf27c6)*

BIN
assets/map.png Normal file

Binary file not shown.

Size: 1.9 MiB

View File

@@ -380,17 +380,6 @@ repositories:
- 44D8F11FD62F878E
- B5690EEEBB952194
- account: kevinveenbirkenbach
alias: infinito-presentation
description: This repository contains a Infinito.Nexus presentation designed for customers, end-users, investors, developers, and administrators, offering tailored content and insights for each group.
homepage: https://github.com/kevinveenbirkenbach/infinito-presentation
provider: github.com
repository: infinito-presentation
verified:
gpg_keys:
- 44D8F11FD62F878E
- B5690EEEBB952194
- account: kevinveenbirkenbach
description: A lightweight Python utility to generate dynamic color schemes from a single base color. Provides HSL-based color transformations for theming, UI design, and CSS variable generation. Optimized for integration in Python projects, Flask applications, and Ansible roles.
homepage: https://github.com/kevinveenbirkenbach/colorscheme-generator
@@ -599,17 +588,6 @@ repositories:
- 44D8F11FD62F878E
- B5690EEEBB952194
- account: kevinveenbirkenbach
desciption: Infinito Inventory Builder — a containerized web application that dynamically generates Ansible inventory files from invokable Infinito.Nexus roles through an interactive, browser-based interface.
homepage: https://github.com/kevinveenbirkenbach/infinito-inventory-builder
alias: invbuild
provider: github.com
repository: infinito-inventory-builder
verified:
gpg_keys:
- 44D8F11FD62F878E
- B5690EEEBB952194
- account: kevinveenbirkenbach
desciption: A simple Python CLI tool to safely rename Linux user accounts using usermod — including home directory migration and validation checks.
homepage: https://github.com/kevinveenbirkenbach/user-rename

debian/changelog vendored
View File

@@ -1,9 +1,177 @@
package-manager (0.9.1-1) unstable; urgency=medium
* Refactored installer: new `venv-create.sh`, cleaner root/user setup flow, updated README with architecture map.
* Split virgin tests into root/user workflows; stabilized Nix installer across distros; improved test scripts with dynamic distro selection and isolated Nix stores.
* Fixed repository directory resolution; improved `pkgmgr path` and `pkgmgr shell`; added full unit/E2E coverage.
* Removed deprecated files and updated `.gitignore`.
-- Kevin Veen-Birkenbach <kevin@veen.world> Wed, 10 Dec 2025 22:56:01 +0100
package-manager (0.9.0-1) unstable; urgency=medium
* Introduce a virgin Arch-based Nix flake E2E workflow that validates pkgmgr's full flake installation path using shared caches for faster and reproducible CI runs.
-- Kevin Veen-Birkenbach <kevin@veen.world> Wed, 10 Dec 2025 18:38:07 +0100
package-manager (0.8.0-1) unstable; urgency=medium
* **v0.7.15 — Installer & Command Resolution Improvements**
* Introduced a unified **layer-based installer pipeline** with clear precedence (OS-packages, Nix, Python, Makefile).
* Reworked installer structure and improved Python/Nix/Makefile installers, including isolated Python venvs and refined flake-output handling.
* Fully rewrote **command resolution** with stronger typing, safer fallbacks, and explicit support for `command: null` to mark library-only repositories.
* Added extensive **unit and integration tests** for installer capability ordering, command resolution, and Nix/Python installer behavior.
* Expanded documentation with capability hierarchy diagrams and scenario matrices.
* Removed deprecated repository entries and obsolete configuration files.
-- Kevin Veen-Birkenbach <kevin@veen.world> Wed, 10 Dec 2025 17:31:57 +0100
package-manager (0.7.14-1) unstable; urgency=medium
* Fixed the clone-all integration test so that `SystemExit(0)` from the proxy is treated as a successful command instead of a failure.
-- Kevin Veen-Birkenbach <kevin@veen.world> Wed, 10 Dec 2025 10:38:33 +0100
package-manager (0.7.13-1) unstable; urgency=medium
* Automated release.
-- Kevin Veen-Birkenbach <kevin@veen.world> Wed, 10 Dec 2025 10:27:24 +0100
package-manager (0.7.12-1) unstable; urgency=medium
* Fixed self-referring alias during setup
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 23:36:35 +0100
package-manager (0.7.11-1) unstable; urgency=medium
* test: fix installer unit tests for OS packages and Nix dev shell
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 23:16:46 +0100
package-manager (0.7.10-1) unstable; urgency=medium
* Fixed test_install_pkgmgr_shallow.py
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 22:57:08 +0100
package-manager (0.7.9-1) unstable; urgency=medium
* 'main' and 'master' are now both accepted as branches for branch close merge
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 21:19:13 +0100
package-manager (0.7.8-1) unstable; urgency=medium
* Missing pyproject.toml doesn't lead to an error during release
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 21:03:24 +0100
package-manager (0.7.7-1) unstable; urgency=medium
* Added TEST_PATTERN parameter to execute dedicated tests
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 17:54:38 +0100
package-manager (0.7.6-1) unstable; urgency=medium
* Fixed pull --preview bug in e2e test
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 17:14:19 +0100
package-manager (0.7.5-1) unstable; urgency=medium
* Fixed wrong directory permissions for nix
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 16:45:42 +0100
package-manager (0.7.4-1) unstable; urgency=medium
* Fixed missing build in test workflow -> Tests pass now
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 16:22:00 +0100
package-manager (0.7.3-1) unstable; urgency=medium
* Fixed bug: Ignored packages are now ignored
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 16:08:31 +0100
package-manager (0.7.2-1) unstable; urgency=medium
* Implemented Changelog Support for Fedora and Debian
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 15:48:58 +0100
package-manager (0.7.1-1) unstable; urgency=medium
* Fix floating 'latest' tag logic: dereference annotated target (vX.Y.Z^{}), add tag message to avoid Git errors, ensure best-effort update without blocking releases, and update unit tests (see ChatGPT conversation: https://chatgpt.com/share/69383024-efa4-800f-a875-129b81fa40ff).
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 15:26:54 +0100
package-manager (0.7.0-1) unstable; urgency=medium
* Add Git helpers for branch sync and floating 'latest' tag in the release workflow, ensure main/master are updated from origin before tagging, and extend unit/e2e tests including 'pkgmgr release --help' coverage (see ChatGPT conversation: https://chatgpt.com/share/69383024-efa4-800f-a875-129b81fa40ff)
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 15:21:03 +0100
package-manager (0.6.0-1) unstable; urgency=medium
* Expose DISTROS and BASE_IMAGE_* variables as exported Makefile environment variables so all build and test commands can consume them dynamically. By exporting these values, every Make target (e.g., build, build-no-cache, build-missing, test-container, test-unit, test-e2e) and every delegated script in scripts/build/ and scripts/test/ now receives a consistent view of the supported distributions and their base container images. This change removes duplicated definitions across scripts, ensures reproducible builds, and allows build tooling to react automatically when new distros or base images are added to the Makefile.
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 05:59:58 +0100
package-manager (0.5.1-1) unstable; urgency=medium
* Refine pkgmgr release CLI close wiring and integration tests for --close flag (ChatGPT: https://chatgpt.com/share/69376b4e-8440-800f-9d06-535ec1d7a40e)
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 01:21:31 +0100
package-manager (0.5.0-1) unstable; urgency=medium
* Add pkgmgr branch close subcommand, extend CLI parser wiring, and add unit tests for branch handling and version version-selection logic (see ChatGPT conversation: https://chatgpt.com/share/693762a3-9ea8-800f-a640-bc78170953d1)
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 00:44:16 +0100
package-manager (0.4.3-1) unstable; urgency=medium
* Implement current-directory repository selection for release and proxy commands, unify selection semantics across CLI layers, extend release workflow with --close, integrate branch closing logic, fix wiring for get_repo_identifier/get_repo_dir, update packaging files (PKGBUILD, spec, flake.nix, pyproject), and add comprehensive unit/e2e tests for release and branch commands (see ChatGPT conversation: https://chatgpt.com/share/69375cfe-9e00-800f-bd65-1bd5937e1696)
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 00:29:08 +0100
package-manager (0.4.2-1) unstable; urgency=medium
* Wire pkgmgr release CLI to new helper and add unit tests (see ChatGPT conversation: https://chatgpt.com/share/69374f09-c760-800f-92e4-5b44a4510b62)
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 00:03:46 +0100
package-manager (0.4.1-1) unstable; urgency=medium
* Add branch close subcommand and integrate release close/editor flow (ChatGPT: https://chatgpt.com/share/69374f09-c760-800f-92e4-5b44a4510b62)
-- Kevin Veen-Birkenbach <kevin@veen.world> Mon, 08 Dec 2025 23:20:28 +0100
package-manager (0.4.0-1) unstable; urgency=medium
* Add branch closing helper and --close flag to release command, including CLI wiring and tests (see https://chatgpt.com/share/69374aec-74ec-800f-bde3-5d91dfdb9b91)
-- Kevin Veen-Birkenbach <kevin@veen.world> Mon, 08 Dec 2025 23:02:43 +0100
package-manager (0.3.0-1) unstable; urgency=medium
* Massive refactor and feature expansion:
- Complete rewrite of config loading system (layered defaults + user config)
- New selection engine (--string, --category, --tag)
- Overhauled list output (colored statuses, alias highlight)
- New config update logic + default YAML sync
- Improved proxy command handling
- Full CLI routing refactor
- Expanded E2E tests for list, proxy, and selection logic
Conversation: https://chatgpt.com/share/693745c3-b8d8-800f-aa29-c8481a2ffae1
-- Kevin Veen-Birkenbach <kevin@veen.world> Mon, 08 Dec 2025 22:40:49 +0100
package-manager (0.2.0-1) unstable; urgency=medium
* Add preview-first release workflow and extended packaging support (see ChatGPT conversation: https://chatgpt.com/share/693722b4-af9c-800f-bccc-8a4036e99630)

View File

@@ -31,7 +31,7 @@
rec {
pkgmgr = pyPkgs.buildPythonApplication {
pname = "package-manager";
version = "0.4.0";
version = "0.9.1";
# Use the git repo as source
src = ./.;
@@ -48,9 +48,7 @@
# Runtime dependencies (matches [project.dependencies])
propagatedBuildInputs = [
pyPkgs.pyyaml
# Add more here if needed, e.g.:
# pyPkgs.click
# pyPkgs.rich
pyPkgs.pip
];
doCheck = false;
@@ -72,10 +70,16 @@
ansiblePkg =
if pkgs ? ansible-core then pkgs.ansible-core
else pkgs.ansible;
# Python 3 + pip for everything that runs "python3 -m pip"
pythonWithPip = pkgs.python3.withPackages (ps: [
ps.pip
]);
in
{
default = pkgs.mkShell {
buildInputs = [
pythonWithPip
pkgmgrPkg
pkgs.git
ansiblePkg
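For orientation, this `default` devShell is the same shell the Makefile test targets enter; entering it by hand should look roughly like this (a sketch reusing the invocation from the test targets earlier in this diff, with an arbitrary command run inside the shell):
```bash
# Enter the pkgmgr devShell defined in flake.nix and run a command inside it
nix develop .#default --no-write-lock-file -c python3 --version
```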

View File

@@ -1,5 +1,5 @@
Name: package-manager
Version: 0.4.0
Version: 0.9.1
Release: 1%{?dist}
Summary: Wrapper that runs Kevin's package-manager via Nix flake
@@ -35,35 +35,36 @@ available on the system.
%install
rm -rf %{buildroot}
install -d %{buildroot}%{_bindir}
install -d %{buildroot}%{_libdir}/package-manager
# Install project tree into a fixed, architecture-independent location.
install -d %{buildroot}/usr/lib/package-manager
# Copy full project source into /usr/lib/package-manager
cp -a . %{buildroot}%{_libdir}/package-manager/
cp -a . %{buildroot}/usr/lib/package-manager/
# Wrapper
install -m0755 scripts/pkgmgr-wrapper.sh %{buildroot}%{_bindir}/pkgmgr
# Shared Nix init script (ensure it is executable in the installed tree)
install -m0755 scripts/init-nix.sh %{buildroot}%{_libdir}/package-manager/init-nix.sh
install -m0755 scripts/init-nix.sh %{buildroot}/usr/lib/package-manager/init-nix.sh
# Remove packaging-only and development artefacts from the installed tree
rm -rf \
%{buildroot}%{_libdir}/package-manager/PKGBUILD \
%{buildroot}%{_libdir}/package-manager/Dockerfile \
%{buildroot}%{_libdir}/package-manager/debian \
%{buildroot}%{_libdir}/package-manager/.git \
%{buildroot}%{_libdir}/package-manager/.github \
%{buildroot}%{_libdir}/package-manager/tests \
%{buildroot}%{_libdir}/package-manager/.gitignore \
%{buildroot}%{_libdir}/package-manager/__pycache__ \
%{buildroot}%{_libdir}/package-manager/.gitkeep || true
%{buildroot}/usr/lib/package-manager/PKGBUILD \
%{buildroot}/usr/lib/package-manager/Dockerfile \
%{buildroot}/usr/lib/package-manager/debian \
%{buildroot}/usr/lib/package-manager/.git \
%{buildroot}/usr/lib/package-manager/.github \
%{buildroot}/usr/lib/package-manager/tests \
%{buildroot}/usr/lib/package-manager/.gitignore \
%{buildroot}/usr/lib/package-manager/__pycache__ \
%{buildroot}/usr/lib/package-manager/.gitkeep || true
%post
# Initialize Nix (if needed) after installing the package-manager files.
if [ -x %{_libdir}/package-manager/init-nix.sh ]; then
%{_libdir}/package-manager/init-nix.sh || true
if [ -x /usr/lib/package-manager/init-nix.sh ]; then
/usr/lib/package-manager/init-nix.sh || true
else
echo ">>> Warning: %{_libdir}/package-manager/init-nix.sh not found or not executable."
echo ">>> Warning: /usr/lib/package-manager/init-nix.sh not found or not executable."
fi
%postun
@@ -73,8 +74,66 @@ echo ">>> package-manager removed. Nix itself was not removed."
%doc README.md
%license LICENSE
%{_bindir}/pkgmgr
%{_libdir}/package-manager/
/usr/lib/package-manager/
%changelog
* Wed Dec 10 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.9.1-1
- Refactored installer: new `venv-create.sh`, cleaner root/user setup flow, updated README with architecture map.
* Split virgin tests into root/user workflows; stabilized Nix installer across distros; improved test scripts with dynamic distro selection and isolated Nix stores.
* Fixed repository directory resolution; improved `pkgmgr path` and `pkgmgr shell`; added full unit/E2E coverage.
* Removed deprecated files and updated `.gitignore`.
* Wed Dec 10 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.9.0-1
- Introduce a virgin Arch-based Nix flake E2E workflow that validates pkgmgr's full flake installation path using shared caches for faster and reproducible CI runs.
* Wed Dec 10 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.8.0-1
- **v0.7.15 — Installer & Command Resolution Improvements**
* Introduced a unified **layer-based installer pipeline** with clear precedence (OS-packages, Nix, Python, Makefile).
* Reworked installer structure and improved Python/Nix/Makefile installers, including isolated Python venvs and refined flake-output handling.
* Fully rewrote **command resolution** with stronger typing, safer fallbacks, and explicit support for `command: null` to mark library-only repositories.
* Added extensive **unit and integration tests** for installer capability ordering, command resolution, and Nix/Python installer behavior.
* Expanded documentation with capability hierarchy diagrams and scenario matrices.
* Removed deprecated repository entries and obsolete configuration files.
* Wed Dec 10 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.14-1
- Fixed the clone-all integration test so that `SystemExit(0)` from the proxy is treated as a successful command instead of a failure.
* Wed Dec 10 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.13-1
- Automated release.
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.12-1
- Fixed self-referring alias during setup
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.11-1
- test: fix installer unit tests for OS packages and Nix dev shell
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.10-1
- Fixed test_install_pkgmgr_shallow.py
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.9-1
- 'main' and 'master' are now both accepted as branches for branch close merge
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.8-1
- Missing pyproject.toml doesn't lead to an error during release
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.7-1
- Added TEST_PATTERN parameter to execute dedicated tests
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.6-1
- Fixed pull --preview bug in e2e test
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.5-1
- Fixed wrong directory permissions for nix
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.4-1
- Fixed missing build in test workflow -> Tests pass now
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.3-1
- Fixed bug: Ignored packages are now ignored
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.2-1
- Implemented Changelog Support for Fedora and Debian
* Sat Dec 06 2025 Kevin Veen-Birkenbach <info@veen.world> - 0.1.1-1
- Initial RPM packaging for package-manager

View File

@@ -1,7 +0,0 @@
version: 1
author: "Kevin Veen-Birkenbach"
url: "https://github.com/kevinveenbirkenbach/package-manager"
description: "A configurable Python-based package manager for managing multiple repositories via Bash."
dependencies: []

View File

@@ -1,3 +1,4 @@
# pkgmgr/actions/branch/__init__.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
@@ -5,40 +6,53 @@
High-level helpers for branch-related operations.
This module encapsulates the actual Git logic so the CLI layer
(pkgmgr.cli_core.commands.branch) stays thin and testable.
(pkgmgr.cli.commands.branch) stays thin and testable.
"""
from __future__ import annotations
from typing import Optional
from pkgmgr.git_utils import run_git, GitError, get_current_branch
from pkgmgr.core.git import run_git, GitError, get_current_branch
# ---------------------------------------------------------------------------
# Branch creation (open)
# ---------------------------------------------------------------------------
def open_branch(
name: Optional[str],
base_branch: str = "main",
fallback_base: str = "master",
cwd: str = ".",
) -> None:
"""
Create and push a new feature branch on top of `base_branch`.
Create and push a new feature branch on top of a base branch.
The base branch is resolved by:
1. Trying 'base_branch' (default: 'main')
2. Falling back to 'fallback_base' (default: 'master')
Steps:
1) git fetch origin
2) git checkout <base_branch>
3) git pull origin <base_branch>
4) git checkout -b <name>
5) git push -u origin <name>
1) git fetch origin
2) git checkout <resolved_base>
3) git pull origin <resolved_base>
4) git checkout -b <name>
5) git push -u origin <name>
If `name` is None or empty, the user is prompted on stdin.
If `name` is None or empty, the user is prompted to enter one.
"""
# Request name interactively if not provided
if not name:
name = input("Enter new branch name: ").strip()
if not name:
raise RuntimeError("Branch name must not be empty.")
# Resolve which base branch to use (main or master)
resolved_base = _resolve_base_branch(base_branch, fallback_base, cwd=cwd)
# 1) Fetch from origin
try:
run_git(["fetch", "origin"], cwd=cwd)
@@ -49,18 +63,18 @@ def open_branch(
# 2) Checkout base branch
try:
run_git(["checkout", base_branch], cwd=cwd)
run_git(["checkout", resolved_base], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to checkout base branch {base_branch!r}: {exc}"
f"Failed to checkout base branch {resolved_base!r}: {exc}"
) from exc
# 3) Pull latest changes on base
# 3) Pull latest changes for base branch
try:
run_git(["pull", "origin", base_branch], cwd=cwd)
run_git(["pull", "origin", resolved_base], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to pull latest changes for base branch {base_branch!r}: {exc}"
f"Failed to pull latest changes for base branch {resolved_base!r}: {exc}"
) from exc
# 4) Create new branch
@@ -68,10 +82,10 @@ def open_branch(
run_git(["checkout", "-b", name], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to create new branch {name!r} from base {base_branch!r}: {exc}"
f"Failed to create new branch {name!r} from base {resolved_base!r}: {exc}"
) from exc
# 5) Push and set upstream
# 5) Push new branch to origin
try:
run_git(["push", "-u", "origin", name], cwd=cwd)
except GitError as exc:
@@ -80,15 +94,21 @@ def open_branch(
) from exc
# ---------------------------------------------------------------------------
# Base branch resolver (shared by open/close)
# ---------------------------------------------------------------------------
def _resolve_base_branch(
preferred: str,
fallback: str,
cwd: str,
) -> str:
"""
Resolve the base branch to use for merging.
Resolve the base branch to use.
Try `preferred` first (default: main),
fall back to `fallback` (default: master).
Try `preferred` (default: main) first, then `fallback` (default: master).
Raise RuntimeError if neither exists.
"""
for candidate in (preferred, fallback):
@@ -103,6 +123,10 @@ def _resolve_base_branch(
)
# ---------------------------------------------------------------------------
# Branch closing (merge + deletion)
# ---------------------------------------------------------------------------
def close_branch(
name: Optional[str],
base_branch: str = "main",
@@ -110,23 +134,22 @@ def close_branch(
cwd: str = ".",
) -> None:
"""
Merge a feature branch into the main/master branch and optionally delete it.
Merge a feature branch into the base branch and delete it afterwards.
Steps:
1) Determine branch name (argument or current branch)
2) Resolve base branch (prefers `base_branch`, falls back to `fallback_base`)
3) Ask for confirmation (y/N)
4) git fetch origin
5) git checkout <base>
6) git pull origin <base>
7) git merge --no-ff <name>
8) git push origin <base>
9) Delete branch locally and on origin
If the user does not confirm with 'y', the operation is aborted.
1) Determine the branch name (argument or current branch)
2) Resolve base branch (main/master)
3) Ask for confirmation
4) git fetch origin
5) git checkout <base>
6) git pull origin <base>
7) git merge --no-ff <name>
8) git push origin <base>
9) Delete branch locally
10) Delete branch on origin (best effort)
"""
# 1) Determine which branch to close
# 1) Determine which branch should be closed
if not name:
try:
name = get_current_branch(cwd=cwd)
@@ -136,7 +159,7 @@ def close_branch(
if not name:
raise RuntimeError("Branch name must not be empty.")
# 2) Resolve base branch (main/master)
# 2) Resolve base branch
target_base = _resolve_base_branch(base_branch, fallback_base, cwd=cwd)
if name == target_base:
@@ -145,7 +168,7 @@ def close_branch(
"Please specify a feature branch."
)
# 3) Confirmation prompt
# 3) Ask user for confirmation
prompt = (
f"Merge branch '{name}' into '{target_base}' and delete it afterwards? "
"(y/N): "
@@ -163,7 +186,7 @@ def close_branch(
f"Failed to fetch from origin before closing branch {name!r}: {exc}"
) from exc
# 5) Checkout base branch
# 5) Checkout base
try:
run_git(["checkout", target_base], cwd=cwd)
except GitError as exc:
@@ -171,7 +194,7 @@ def close_branch(
f"Failed to checkout base branch {target_base!r}: {exc}"
) from exc
# 6) Pull latest base
# 6) Pull latest base state
try:
run_git(["pull", "origin", target_base], cwd=cwd)
except GitError as exc:
@@ -179,7 +202,7 @@ def close_branch(
f"Failed to pull latest changes for base branch {target_base!r}: {exc}"
) from exc
# 7) Merge feature branch into base
# 7) Merge the feature branch
try:
run_git(["merge", "--no-ff", name], cwd=cwd)
except GitError as exc:
@@ -192,22 +215,21 @@ def close_branch(
run_git(["push", "origin", target_base], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to push base branch {target_base!r} to origin after merge: {exc}"
f"Failed to push base branch {target_base!r} after merge: {exc}"
) from exc
# 9) Delete feature branch locally
# 9) Delete branch locally
try:
run_git(["branch", "-d", name], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to delete local branch {name!r} after merge: {exc}"
f"Failed to delete local branch {name!r}: {exc}"
) from exc
# 10) Delete feature branch on origin (best effort)
# 10) Delete branch on origin (best effort)
try:
run_git(["push", "origin", "--delete", name], cwd=cwd)
except GitError as exc:
# Remote delete is nice-to-have; surface as RuntimeError for clarity.
raise RuntimeError(
f"Branch {name!r} was deleted locally, but remote deletion failed: {exc}"
) from exc
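Taken together, the two helpers can be driven from a thin CLI layer; a minimal usage sketch based on the signatures above (the checkout path and branch name are placeholders):
```python
# Sketch only: exercise the branch helpers against a local checkout
from pkgmgr.actions.branch import open_branch, close_branch

repo_dir = "/path/to/checkout"  # placeholder path

# Creates the branch on top of main (or master as fallback) and pushes it with upstream set.
open_branch("feature/docs", cwd=repo_dir)

# ... commit work on the feature branch ...

# Prompts for confirmation, merges with --no-ff, pushes the base branch,
# then deletes the feature branch locally and on origin.
close_branch("feature/docs", cwd=repo_dir)
```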

View File

@@ -13,7 +13,7 @@ from __future__ import annotations
from typing import Optional
from pkgmgr.git_utils import run_git, GitError
from pkgmgr.core.git import run_git, GitError
def generate_changelog(

View File

@@ -0,0 +1,181 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Initialize user configuration by scanning the repositories base directory.
This module scans the path:
defaults_config["directories"]["repositories"]
with the expected structure:
{base}/{provider}/{account}/{repository}
For each discovered repository, the function:
• derives provider, account, repository from the folder structure
• (optionally) determines the latest commit hash via git log
• generates a unique CLI alias
• marks ignore=True for newly discovered repos
• skips repos already known in defaults or user config
"""
from __future__ import annotations
import os
import subprocess
from typing import Any, Dict
from pkgmgr.core.command.alias import generate_alias
from pkgmgr.core.config.save import save_user_config
def config_init(
user_config: Dict[str, Any],
defaults_config: Dict[str, Any],
bin_dir: str,
user_config_path: str,
) -> None:
"""
Scan the repositories base directory and add missing entries
to the user configuration.
"""
# ------------------------------------------------------------
# Announce where we will write the result
# ------------------------------------------------------------
print("============================================================")
print(f"[INIT] Writing user configuration to:")
print(f" {user_config_path}")
print("============================================================")
repositories_base_dir = os.path.expanduser(
defaults_config["directories"]["repositories"]
)
print(f"[INIT] Scanning repository base directory:")
print(f" {repositories_base_dir}")
print("")
if not os.path.isdir(repositories_base_dir):
print(f"[ERROR] Base directory does not exist: {repositories_base_dir}")
return
default_keys = {
(entry.get("provider"), entry.get("account"), entry.get("repository"))
for entry in defaults_config.get("repositories", [])
}
existing_keys = {
(entry.get("provider"), entry.get("account"), entry.get("repository"))
for entry in user_config.get("repositories", [])
}
existing_aliases = {
entry.get("alias")
for entry in user_config.get("repositories", [])
if entry.get("alias")
}
new_entries = []
scanned = 0
skipped = 0
# ------------------------------------------------------------
# Actual scanning
# ------------------------------------------------------------
for provider in os.listdir(repositories_base_dir):
provider_path = os.path.join(repositories_base_dir, provider)
if not os.path.isdir(provider_path):
continue
print(f"[SCAN] Provider: {provider}")
for account in os.listdir(provider_path):
account_path = os.path.join(provider_path, account)
if not os.path.isdir(account_path):
continue
print(f"[SCAN] Account: {account}")
for repo_name in os.listdir(account_path):
repo_path = os.path.join(account_path, repo_name)
if not os.path.isdir(repo_path):
continue
scanned += 1
key = (provider, account, repo_name)
# Already known?
if key in default_keys:
skipped += 1
print(f"[SKIP] (defaults) {provider}/{account}/{repo_name}")
continue
if key in existing_keys:
skipped += 1
print(f"[SKIP] (user-config) {provider}/{account}/{repo_name}")
continue
print(f"[ADD] {provider}/{account}/{repo_name}")
# Determine commit hash
try:
result = subprocess.run(
["git", "log", "-1", "--format=%H"],
cwd=repo_path,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
check=True,
)
verified = result.stdout.strip()
print(f"[INFO] Latest commit: {verified}")
except Exception as exc:
verified = ""
print(f"[WARN] Could not read commit: {exc}")
entry = {
"provider": provider,
"account": account,
"repository": repo_name,
"verified": {"commit": verified},
"ignore": True,
}
# Alias generation
alias = generate_alias(
{
"repository": repo_name,
"provider": provider,
"account": account,
},
bin_dir,
existing_aliases,
)
entry["alias"] = alias
existing_aliases.add(alias)
print(f"[INFO] Alias generated: {alias}")
new_entries.append(entry)
print("") # blank line between accounts
# ------------------------------------------------------------
# Summary
# ------------------------------------------------------------
print("============================================================")
print(f"[DONE] Scanned repositories: {scanned}")
print(f"[DONE] Skipped (known): {skipped}")
print(f"[DONE] New entries discovered: {len(new_entries)}")
print("============================================================")
# ------------------------------------------------------------
# Save if needed
# ------------------------------------------------------------
if new_entries:
user_config.setdefault("repositories", []).extend(new_entries)
save_user_config(user_config, user_config_path)
print(f"[SAVE] Wrote user configuration to:")
print(f" {user_config_path}")
else:
print("[INFO] No new repositories were added.")
print("============================================================")

View File

@@ -1,5 +1,5 @@
import yaml
from .load_config import load_config
from pkgmgr.core.config.load import load_config
def show_config(selected_repos, user_config_path, full_config=False):
"""Display configuration for one or more repositories, or the entire merged config."""

View File

@@ -0,0 +1,218 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
High-level entry point for repository installation.
Responsibilities:
- Ensure the repository directory exists (clone if necessary).
- Verify the repository (GPG / commit checks).
- Build a RepoContext object.
- Delegate the actual installation decision logic to InstallationPipeline.
"""
from __future__ import annotations
import os
from typing import Any, Dict, List
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.verify import verify_repository
from pkgmgr.actions.repository.clone import clone_repos
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.os_packages import (
ArchPkgbuildInstaller,
DebianControlInstaller,
RpmSpecInstaller,
)
from pkgmgr.actions.install.installers.nix_flake import (
NixFlakeInstaller,
)
from pkgmgr.actions.install.installers.python import PythonInstaller
from pkgmgr.actions.install.installers.makefile import (
MakefileInstaller,
)
from pkgmgr.actions.install.pipeline import InstallationPipeline
Repository = Dict[str, Any]
# All available installers, in the order they should be considered.
INSTALLERS = [
ArchPkgbuildInstaller(),
DebianControlInstaller(),
RpmSpecInstaller(),
NixFlakeInstaller(),
PythonInstaller(),
MakefileInstaller(),
]
# ---------------------------------------------------------------------------
# Internal helpers
# ---------------------------------------------------------------------------
def _ensure_repo_dir(
repo: Repository,
repositories_base_dir: str,
all_repos: List[Repository],
preview: bool,
no_verification: bool,
clone_mode: str,
identifier: str,
) -> str | None:
"""
Compute and, if necessary, clone the repository directory.
Returns the absolute repository path or None if cloning ultimately failed.
"""
repo_dir = get_repo_dir(repositories_base_dir, repo)
if not os.path.exists(repo_dir):
print(
f"Repository directory '{repo_dir}' does not exist. "
f"Cloning it now..."
)
clone_repos(
[repo],
repositories_base_dir,
all_repos,
preview,
no_verification,
clone_mode,
)
if not os.path.exists(repo_dir):
print(
f"Cloning failed for repository {identifier}. "
f"Skipping installation."
)
return None
return repo_dir
def _verify_repo(
repo: Repository,
repo_dir: str,
no_verification: bool,
identifier: str,
) -> bool:
"""
Verify a repository using the configured verification data.
Returns True if verification is considered okay and installation may continue.
"""
verified_info = repo.get("verified")
verified_ok, errors, _commit_hash, _signing_key = verify_repository(
repo,
repo_dir,
mode="local",
no_verification=no_verification,
)
if not no_verification and verified_info and not verified_ok:
print(f"Warning: Verification failed for {identifier}:")
for err in errors:
print(f" - {err}")
choice = input("Continue anyway? [y/N]: ").strip().lower()
if choice != "y":
print(f"Skipping installation for {identifier}.")
return False
return True
def _create_context(
repo: Repository,
identifier: str,
repo_dir: str,
repositories_base_dir: str,
bin_dir: str,
all_repos: List[Repository],
no_verification: bool,
preview: bool,
quiet: bool,
clone_mode: str,
update_dependencies: bool,
) -> RepoContext:
"""
Build a RepoContext instance for the given repository.
"""
return RepoContext(
repo=repo,
identifier=identifier,
repo_dir=repo_dir,
repositories_base_dir=repositories_base_dir,
bin_dir=bin_dir,
all_repos=all_repos,
no_verification=no_verification,
preview=preview,
quiet=quiet,
clone_mode=clone_mode,
update_dependencies=update_dependencies,
)
# ---------------------------------------------------------------------------
# Public API
# ---------------------------------------------------------------------------
def install_repos(
selected_repos: List[Repository],
repositories_base_dir: str,
bin_dir: str,
all_repos: List[Repository],
no_verification: bool,
preview: bool,
quiet: bool,
clone_mode: str,
update_dependencies: bool,
) -> None:
"""
Install one or more repositories according to the configured installers
and the CLI layer precedence rules.
"""
pipeline = InstallationPipeline(INSTALLERS)
for repo in selected_repos:
identifier = get_repo_identifier(repo, all_repos)
repo_dir = _ensure_repo_dir(
repo=repo,
repositories_base_dir=repositories_base_dir,
all_repos=all_repos,
preview=preview,
no_verification=no_verification,
clone_mode=clone_mode,
identifier=identifier,
)
if not repo_dir:
continue
if not _verify_repo(
repo=repo,
repo_dir=repo_dir,
no_verification=no_verification,
identifier=identifier,
):
continue
ctx = _create_context(
repo=repo,
identifier=identifier,
repo_dir=repo_dir,
repositories_base_dir=repositories_base_dir,
bin_dir=bin_dir,
all_repos=all_repos,
no_verification=no_verification,
preview=preview,
quiet=quiet,
clone_mode=clone_mode,
update_dependencies=update_dependencies,
)
pipeline.run(ctx)
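A minimal call sketch for `install_repos`, with a repository entry shaped like the config entries shown elsewhere in this diff; the import path, the directory values, and the `clone_mode` string are assumptions, not taken from this diff:
```python
# Sketch only: push a single configured repository through the installation pipeline
from pkgmgr.actions.install import install_repos  # import path assumed

repo = {
    "provider": "github.com",
    "account": "kevinveenbirkenbach",
    "repository": "package-manager",
    "alias": "pkgmgr",
}

install_repos(
    selected_repos=[repo],
    repositories_base_dir="~/Repositories",  # example base directory
    bin_dir="~/.local/bin",                  # example bin directory
    all_repos=[repo],
    no_verification=True,
    preview=True,                            # preview flag is passed through to the installers
    quiet=False,
    clone_mode="https",                      # example value; valid modes are defined elsewhere
    update_dependencies=False,
)
```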

View File

@@ -38,7 +38,7 @@ from abc import ABC, abstractmethod
from typing import Iterable, TYPE_CHECKING
if TYPE_CHECKING:
from pkgmgr.context import RepoContext
from pkgmgr.actions.install.context import RepoContext
# ---------------------------------------------------------------------------

View File

@@ -0,0 +1,19 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installer package for pkgmgr.
This exposes all installer classes so users can import them directly from
pkgmgr.actions.install.installers.
"""
from pkgmgr.actions.install.installers.base import BaseInstaller # noqa: F401
from pkgmgr.actions.install.installers.nix_flake import NixFlakeInstaller # noqa: F401
from pkgmgr.actions.install.installers.python import PythonInstaller # noqa: F401
from pkgmgr.actions.install.installers.makefile import MakefileInstaller # noqa: F401
# OS-specific installers
from pkgmgr.actions.install.installers.os_packages.arch_pkgbuild import ArchPkgbuildInstaller # noqa: F401
from pkgmgr.actions.install.installers.os_packages.debian_control import DebianControlInstaller # noqa: F401
from pkgmgr.actions.install.installers.os_packages.rpm_spec import RpmSpecInstaller # noqa: F401
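As the module docstring says, these re-exports allow flat imports; for example:
```python
# Installer classes can be imported directly from the package root
from pkgmgr.actions.install.installers import (
    NixFlakeInstaller,
    PythonInstaller,
    MakefileInstaller,
)
```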

View File

@@ -8,8 +8,8 @@ Base interface for all installer components in the pkgmgr installation pipeline.
from abc import ABC, abstractmethod
from typing import Set
from pkgmgr.context import RepoContext
from pkgmgr.capabilities import CAPABILITY_MATCHERS
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.capabilities import CAPABILITY_MATCHERS
class BaseInstaller(ABC):

View File

@@ -0,0 +1,97 @@
from __future__ import annotations
import os
import re
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
class MakefileInstaller(BaseInstaller):
"""
Generic installer that runs `make install` if a Makefile with an
install target is present.
Safety rules:
- If PKGMGR_DISABLE_MAKEFILE_INSTALLER=1 is set, this installer
is globally disabled.
- The higher-level InstallationPipeline ensures that Makefile
installation does not run if a stronger CLI layer already owns
the command (e.g. Nix or OS packages).
"""
layer = "makefile"
MAKEFILE_NAME = "Makefile"
def supports(self, ctx: RepoContext) -> bool:
"""
Return True if this repository has a Makefile and the installer
is not globally disabled.
"""
# Optional global kill switch.
if os.environ.get("PKGMGR_DISABLE_MAKEFILE_INSTALLER") == "1":
if not ctx.quiet:
print(
"[INFO] MakefileInstaller is disabled via "
"PKGMGR_DISABLE_MAKEFILE_INSTALLER."
)
return False
makefile_path = os.path.join(ctx.repo_dir, self.MAKEFILE_NAME)
return os.path.exists(makefile_path)
def _has_install_target(self, makefile_path: str) -> bool:
"""
Heuristically check whether the Makefile defines an install target.
We look for:
- a plain 'install:' target, or
- any 'install-*:' style target.
"""
try:
with open(makefile_path, "r", encoding="utf-8", errors="ignore") as f:
content = f.read()
except OSError:
return False
# Simple heuristics: look for "install:" or targets starting with "install-"
if re.search(r"^install\s*:", content, flags=re.MULTILINE):
return True
if re.search(r"^install-[a-zA-Z0-9_-]*\s*:", content, flags=re.MULTILINE):
return True
return False
def run(self, ctx: RepoContext) -> None:
"""
Execute `make install` in the repository directory if an install
target exists.
"""
makefile_path = os.path.join(ctx.repo_dir, self.MAKEFILE_NAME)
if not os.path.exists(makefile_path):
if not ctx.quiet:
print(
f"[pkgmgr] Makefile '{makefile_path}' not found, "
"skipping MakefileInstaller."
)
return
if not self._has_install_target(makefile_path):
if not ctx.quiet:
print(
f"[pkgmgr] No 'install' target found in {makefile_path}."
)
return
if not ctx.quiet:
print(
f"[pkgmgr] Running 'make install' in {ctx.repo_dir} "
f"(MakefileInstaller)"
)
cmd = "make install"
run_command(cmd, cwd=ctx.repo_dir, preview=ctx.preview)
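Since the kill switch is a plain environment variable, the Makefile layer can be disabled for a single run; a sketch using the `pkgmgr install` invocation shown in the README section of this diff:
```bash
# Temporarily disable the Makefile installer layer for one invocation
PKGMGR_DISABLE_MAKEFILE_INSTALLER=1 pkgmgr install --all
```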

View File

@@ -0,0 +1,160 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installer for Nix flakes.
If a repository contains flake.nix and the 'nix' command is available, this
installer will try to install profile outputs from the flake.
Behavior:
- If flake.nix is present and `nix` exists on PATH:
* First remove any existing `package-manager` profile entry (best-effort).
* Then install one or more flake outputs via `nix profile install`.
- For the package-manager repo:
* `pkgmgr` is mandatory (CLI), `default` is optional.
- For all other repos:
* `default` is mandatory.
Special handling:
- If PKGMGR_DISABLE_NIX_FLAKE_INSTALLER=1 is set, the installer is
globally disabled (useful for CI or debugging).
The higher-level InstallationPipeline and CLI-layer model decide when this
installer is allowed to run, based on where the current CLI comes from
(e.g. Nix, OS packages, Python, Makefile).
"""
import os
import shutil
from typing import TYPE_CHECKING, List, Tuple
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
if TYPE_CHECKING:
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install import InstallContext
class NixFlakeInstaller(BaseInstaller):
"""Install Nix flake profiles for repositories that define flake.nix."""
# Logical layer name, used by capability matchers.
layer = "nix"
FLAKE_FILE = "flake.nix"
PROFILE_NAME = "package-manager"
def supports(self, ctx: "RepoContext") -> bool:
"""
Only support repositories that:
- Are NOT explicitly disabled via PKGMGR_DISABLE_NIX_FLAKE_INSTALLER=1,
- Have a flake.nix,
- And have the `nix` command available.
"""
# Optional global kill-switch for CI or debugging.
if os.environ.get("PKGMGR_DISABLE_NIX_FLAKE_INSTALLER") == "1":
print(
"[INFO] PKGMGR_DISABLE_NIX_FLAKE_INSTALLER=1 "
"NixFlakeInstaller is disabled."
)
return False
# Nix must be available.
if shutil.which("nix") is None:
return False
# flake.nix must exist in the repository.
flake_path = os.path.join(ctx.repo_dir, self.FLAKE_FILE)
return os.path.exists(flake_path)
def _ensure_old_profile_removed(self, ctx: "RepoContext") -> None:
"""
Best-effort removal of an existing profile entry.
This handles the "already provides the following file" conflict by
removing previous `package-manager` installations before we install
the new one.
Any error in `nix profile remove` is intentionally ignored, because
a missing profile entry is not a fatal condition.
"""
if shutil.which("nix") is None:
return
cmd = f"nix profile remove {self.PROFILE_NAME} || true"
try:
# NOTE: no allow_failure here → matches the existing unit tests
run_command(cmd, cwd=ctx.repo_dir, preview=ctx.preview)
except SystemExit:
# Unit tests explicitly assert this is swallowed
pass
def _profile_outputs(self, ctx: "RepoContext") -> List[Tuple[str, bool]]:
"""
Decide which flake outputs to install and whether failures are fatal.
Returns a list of (output_name, allow_failure) tuples.
Rules:
- For the package-manager repo (identifier 'pkgmgr' or 'package-manager'):
[("pkgmgr", False), ("default", True)]
- For all other repos:
[("default", False)]
"""
ident = ctx.identifier
if ident in {"pkgmgr", "package-manager"}:
# pkgmgr: main CLI output is "pkgmgr" (mandatory),
# "default" is nice-to-have (non-fatal).
return [("pkgmgr", False), ("default", True)]
# Generic repos: we expect a sensible "default" package/app.
# Failure to install it is considered fatal.
return [("default", False)]
def run(self, ctx: "InstallContext") -> None:
"""
Install Nix flake profile outputs.
For the package-manager repo, failure installing 'pkgmgr' is fatal,
failure installing 'default' is non-fatal.
For other repos, failure installing 'default' is fatal.
"""
# Reuse supports() to keep logic in one place.
if not self.supports(ctx): # type: ignore[arg-type]
return
outputs = self._profile_outputs(ctx) # list of (name, allow_failure)
print(
"Nix flake detected in "
f"{ctx.identifier}, attempting to install profile outputs: "
+ ", ".join(name for name, _ in outputs)
)
# Handle the "already installed" case up-front for the shared profile.
self._ensure_old_profile_removed(ctx) # type: ignore[arg-type]
for output, allow_failure in outputs:
cmd = f"nix profile install {ctx.repo_dir}#{output}"
try:
run_command(
cmd,
cwd=ctx.repo_dir,
preview=ctx.preview,
allow_failure=allow_failure,
)
print(f"Nix flake output '{output}' successfully installed.")
except SystemExit as e:
print(f"[Error] Failed to install Nix flake output '{output}': {e}")
if not allow_failure:
# Mandatory output failed → fatal for the pipeline.
raise
# Optional output failed → log and continue.
print(
"[Warning] Continuing despite failure to install "
f"optional output '{output}'."
)
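For the package-manager repository itself, the commands issued by `_ensure_old_profile_removed()` and `run()` above boil down to this sequence (the checkout path is a placeholder):
```bash
# Best-effort cleanup of an existing profile entry, then install the flake outputs
nix profile remove package-manager || true
nix profile install /path/to/package-manager#pkgmgr    # mandatory output
nix profile install /path/to/package-manager#default   # optional output; failure is tolerated
```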

View File

@@ -3,9 +3,9 @@
import os
import shutil
from pkgmgr.context import RepoContext
from pkgmgr.installers.base import BaseInstaller
from pkgmgr.run_command import run_command
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
class ArchPkgbuildInstaller(BaseInstaller):

View File

@@ -17,12 +17,11 @@ apt/dpkg tooling are available.
import glob
import os
import shutil
from typing import List
from pkgmgr.context import RepoContext
from pkgmgr.installers.base import BaseInstaller
from pkgmgr.run_command import run_command
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
class DebianControlInstaller(BaseInstaller):
@@ -68,6 +67,32 @@ class DebianControlInstaller(BaseInstaller):
pattern = os.path.join(parent, "*.deb")
return sorted(glob.glob(pattern))
def _privileged_prefix(self) -> str | None:
"""
Determine how to run privileged commands:
- If 'sudo' is available, return 'sudo '.
- If we are running as root (e.g. inside CI/container), return ''.
- Otherwise, return None, meaning we cannot safely elevate.
Callers are responsible for handling the None case (usually by
warning and skipping automatic installation).
"""
sudo_path = shutil.which("sudo")
is_root = False
try:
is_root = os.geteuid() == 0
except AttributeError: # pragma: no cover - non-POSIX platforms
# On non-POSIX systems, fall back to assuming "not root".
is_root = False
if sudo_path is not None:
return "sudo "
if is_root:
return ""
return None
def _install_build_dependencies(self, ctx: RepoContext) -> None:
"""
Install build dependencies using `apt-get build-dep ./`.
@@ -86,12 +111,25 @@ class DebianControlInstaller(BaseInstaller):
)
return
prefix = self._privileged_prefix()
if prefix is None:
print(
"[Warning] Neither 'sudo' is available nor running as root. "
"Skipping automatic build-dep installation for Debian. "
"Please install build dependencies from debian/control manually."
)
return
# Update package lists first for reliable build-dep resolution.
run_command("sudo apt-get update", cwd=ctx.repo_dir, preview=ctx.preview)
run_command(
f"{prefix}apt-get update",
cwd=ctx.repo_dir,
preview=ctx.preview,
)
# Install build dependencies based on debian/control in the current tree.
# `apt-get build-dep ./` uses the source in the current directory.
builddep_cmd = "sudo apt-get build-dep -y ./"
builddep_cmd = f"{prefix}apt-get build-dep -y ./"
run_command(builddep_cmd, cwd=ctx.repo_dir, preview=ctx.preview)
def run(self, ctx: RepoContext) -> None:
@@ -101,7 +139,7 @@ class DebianControlInstaller(BaseInstaller):
Steps:
1. apt-get build-dep ./ (automatic build dependency installation)
2. dpkg-buildpackage -b -us -uc
3. sudo dpkg -i ../*.deb
3. sudo dpkg -i ../*.deb (or plain dpkg -i when running as root)
"""
control_path = self._control_path(ctx)
if not os.path.exists(control_path):
@@ -123,7 +161,17 @@ class DebianControlInstaller(BaseInstaller):
)
return
prefix = self._privileged_prefix()
if prefix is None:
print(
"[Warning] Neither 'sudo' is available nor running as root. "
"Skipping automatic .deb installation. "
"You can manually install the following files with dpkg -i:\n "
+ "\n ".join(debs)
)
return
# 4) Install .deb files
install_cmd = "sudo dpkg -i " + " ".join(os.path.basename(d) for d in debs)
install_cmd = prefix + "dpkg -i " + " ".join(os.path.basename(d) for d in debs)
parent = os.path.dirname(ctx.repo_dir)
run_command(install_cmd, cwd=parent, preview=ctx.preview)
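On a Debian-based host where `sudo` is available, the `run()` flow above corresponds to roughly this manual sequence, executed from the repository root (a sketch, not a verbatim transcript):
```bash
# Manual equivalent of the DebianControlInstaller steps (sudo case)
sudo apt-get update
sudo apt-get build-dep -y ./     # resolve build dependencies from debian/control
dpkg-buildpackage -b -us -uc     # build unsigned binary packages into the parent directory
sudo dpkg -i ../*.deb            # install the resulting .deb files
```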

View File

@@ -0,0 +1,282 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installer for RPM-based packages defined in *.spec files.
This installer:
1. Installs build dependencies via dnf/yum builddep (where available)
2. Prepares a source tarball in ~/rpmbuild/SOURCES based on the .spec
3. Uses rpmbuild to build RPMs from the provided .spec file
4. Installs the resulting RPMs via the system package manager (dnf/yum)
or rpm as a fallback.
It targets RPM-based systems (Fedora / RHEL / CentOS / Rocky / Alma, etc.).
"""
import glob
import os
import shutil
import tarfile
from typing import List, Optional, Tuple
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
class RpmSpecInstaller(BaseInstaller):
"""
Build and install RPM-based packages from *.spec files.
This installer is responsible for the full build + install of the
application on RPM-like systems.
"""
# Logical layer name, used by capability matchers.
layer = "os-packages"
def _is_rpm_like(self) -> bool:
"""
Basic RPM-like detection:
- rpmbuild must be available
- at least one of dnf / yum / yum-builddep must be present
"""
if shutil.which("rpmbuild") is None:
return False
has_dnf = shutil.which("dnf") is not None
has_yum = shutil.which("yum") is not None
has_yum_builddep = shutil.which("yum-builddep") is not None
return has_dnf or has_yum or has_yum_builddep
def _spec_path(self, ctx: RepoContext) -> Optional[str]:
"""Return the first *.spec file in the repository root, if any."""
pattern = os.path.join(ctx.repo_dir, "*.spec")
matches = sorted(glob.glob(pattern))
if not matches:
return None
return matches[0]
# ------------------------------------------------------------------
# Helpers for preparing rpmbuild topdir and source tarball
# ------------------------------------------------------------------
def _rpmbuild_topdir(self) -> str:
"""
Return the rpmbuild topdir that rpmbuild will use by default.
By default this is: ~/rpmbuild
In the self-install tests, $HOME is set to /tmp/pkgmgr-self-install,
so this becomes /tmp/pkgmgr-self-install/rpmbuild which matches the
paths in the RPM build logs.
"""
home = os.path.expanduser("~")
return os.path.join(home, "rpmbuild")
def _ensure_rpmbuild_tree(self, topdir: str) -> None:
"""
Ensure the standard rpmbuild directory tree exists:
<topdir>/
BUILD/
BUILDROOT/
RPMS/
SOURCES/
SPECS/
SRPMS/
"""
for sub in ("BUILD", "BUILDROOT", "RPMS", "SOURCES", "SPECS", "SRPMS"):
os.makedirs(os.path.join(topdir, sub), exist_ok=True)
def _parse_name_version(self, spec_path: str) -> Optional[Tuple[str, str]]:
"""
Parse Name and Version from the given .spec file.
Returns (name, version) or None if either cannot be determined.
"""
name = None
version = None
with open(spec_path, "r", encoding="utf-8") as f:
for raw_line in f:
line = raw_line.strip()
# Ignore comments
if not line or line.startswith("#"):
continue
lower = line.lower()
if lower.startswith("name:"):
# e.g. "Name: package-manager"
parts = line.split(":", 1)
if len(parts) == 2:
name = parts[1].strip()
elif lower.startswith("version:"):
# e.g. "Version: 0.7.7"
parts = line.split(":", 1)
if len(parts) == 2:
version = parts[1].strip()
if name and version:
break
if not name or not version:
print(
"[Warning] Could not determine Name/Version from spec file "
f"'{spec_path}'. Skipping RPM source tarball preparation."
)
return None
return name, version
def _prepare_source_tarball(self, ctx: RepoContext, spec_path: str) -> None:
"""
Prepare a source tarball in <HOME>/rpmbuild/SOURCES that matches
the Name/Version in the .spec file.
"""
parsed = self._parse_name_version(spec_path)
if parsed is None:
return
name, version = parsed
topdir = self._rpmbuild_topdir()
self._ensure_rpmbuild_tree(topdir)
build_dir = os.path.join(topdir, "BUILD")
sources_dir = os.path.join(topdir, "SOURCES")
source_root = os.path.join(build_dir, f"{name}-{version}")
tarball_path = os.path.join(sources_dir, f"{name}-{version}.tar.gz")
# Clean any previous build directory for this name/version.
if os.path.exists(source_root):
shutil.rmtree(source_root)
# Copy the repository tree into BUILD/<name>-<version>.
shutil.copytree(ctx.repo_dir, source_root)
# Create the tarball with the top-level directory <name>-<version>.
if os.path.exists(tarball_path):
os.remove(tarball_path)
with tarfile.open(tarball_path, "w:gz") as tar:
tar.add(source_root, arcname=f"{name}-{version}")
print(
f"[INFO] Prepared RPM source tarball at '{tarball_path}' "
f"from '{ctx.repo_dir}'."
)
# ------------------------------------------------------------------
def supports(self, ctx: RepoContext) -> bool:
"""
This installer is supported if:
- we are on an RPM-based system (rpmbuild + dnf/yum/yum-builddep available), and
- a *.spec file exists in the repository root.
"""
if not self._is_rpm_like():
return False
return self._spec_path(ctx) is not None
def _find_built_rpms(self) -> List[str]:
"""
Find RPMs built by rpmbuild.
By default, rpmbuild outputs RPMs into:
~/rpmbuild/RPMS/*/*.rpm
"""
topdir = self._rpmbuild_topdir()
pattern = os.path.join(topdir, "RPMS", "**", "*.rpm")
return sorted(glob.glob(pattern, recursive=True))
def _install_build_dependencies(self, ctx: RepoContext, spec_path: str) -> None:
"""
Install build dependencies for the given .spec file.
"""
spec_basename = os.path.basename(spec_path)
if shutil.which("dnf") is not None:
cmd = f"sudo dnf builddep -y {spec_basename}"
elif shutil.which("yum-builddep") is not None:
cmd = f"sudo yum-builddep -y {spec_basename}"
elif shutil.which("yum") is not None:
cmd = f"sudo yum-builddep -y {spec_basename}"
else:
print(
"[Warning] No suitable RPM builddep tool (dnf/yum-builddep/yum) found. "
"Skipping automatic build dependency installation for RPM."
)
return
run_command(cmd, cwd=ctx.repo_dir, preview=ctx.preview)
def _install_built_rpms(self, ctx: RepoContext, rpms: List[str]) -> None:
"""
Install or upgrade the built RPMs.
Strategy:
- Prefer dnf install -y <rpms> (handles upgrades cleanly)
- Else yum install -y <rpms>
- Else fallback to rpm -Uvh <rpms> (upgrade/replace existing)
"""
if not rpms:
print(
"[Warning] No RPM files found after rpmbuild. "
"Skipping RPM package installation."
)
return
dnf = shutil.which("dnf")
yum = shutil.which("yum")
rpm = shutil.which("rpm")
if dnf is not None:
install_cmd = "sudo dnf install -y " + " ".join(rpms)
elif yum is not None:
install_cmd = "sudo yum install -y " + " ".join(rpms)
elif rpm is not None:
# Fallback: use rpm in upgrade mode so an existing older
# version is replaced instead of causing file conflicts.
install_cmd = "sudo rpm -Uvh " + " ".join(rpms)
else:
print(
"[Warning] No suitable RPM installer (dnf/yum/rpm) found. "
"Cannot install built RPMs."
)
return
run_command(install_cmd, cwd=ctx.repo_dir, preview=ctx.preview)
def run(self, ctx: RepoContext) -> None:
"""
Build and install RPM-based packages.
Steps:
1. Prepare source tarball in ~/rpmbuild/SOURCES matching Name/Version
2. dnf/yum builddep <spec> (automatic build dependency installation)
3. rpmbuild -ba path/to/spec
4. Install built RPMs via dnf/yum (or rpm as fallback)
"""
spec_path = self._spec_path(ctx)
if not spec_path:
return
# 1) Prepare source tarball so rpmbuild finds Source0 in SOURCES.
self._prepare_source_tarball(ctx, spec_path)
# 2) Install build dependencies
self._install_build_dependencies(ctx, spec_path)
# 3) Build RPMs
spec_basename = os.path.basename(spec_path)
build_cmd = f"rpmbuild -ba {spec_basename}"
run_command(build_cmd, cwd=ctx.repo_dir, preview=ctx.preview)
# 4) Find and install built RPMs
rpms = self._find_built_rpms()
self._install_built_rpms(ctx, rpms)
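A standalone sketch of the Source0 convention this installer relies on: rpmbuild expects ~/rpmbuild/SOURCES/<name>-<version>.tar.gz whose top-level directory is <name>-<version>, matching %setup in the spec. Paths and names below are illustrative.

# Illustrative only: build the tarball layout rpmbuild expects for Source0.
import os
import tarfile

def make_source_tarball(repo_dir: str, name: str, version: str, topdir: str) -> str:
    sources = os.path.join(topdir, "SOURCES")
    os.makedirs(sources, exist_ok=True)
    tarball = os.path.join(sources, f"{name}-{version}.tar.gz")
    with tarfile.open(tarball, "w:gz") as tar:
        # The archive root must be <name>-<version> so %setup unpacks cleanly.
        tar.add(repo_dir, arcname=f"{name}-{version}")
    return tarball

# e.g. make_source_tarball(".", "package-manager", "0.9.1",
#                          os.path.expanduser("~/rpmbuild"))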

View File

@@ -0,0 +1,139 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
PythonInstaller — install Python projects defined via pyproject.toml.
Installation rules:
1. pip command resolution:
a) If PKGMGR_PIP is set → use it exactly as provided.
b) Else if running inside a virtualenv → use `sys.executable -m pip`.
c) Else → create/use a per-repository virtualenv under ~/.venvs/<repo>/.
2. Installation target:
- Always install into the resolved pip environment.
- Never modify system Python, never rely on --user.
- Nix-immutable systems (PEP 668) are automatically avoided because we
never touch system Python.
3. The installer is skipped when:
- PKGMGR_DISABLE_PYTHON_INSTALLER=1 is set.
- The repository has no pyproject.toml.
All pip failures are treated as fatal.
"""
from __future__ import annotations
import os
import sys
import subprocess
from typing import TYPE_CHECKING
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
if TYPE_CHECKING:
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install import InstallContext
class PythonInstaller(BaseInstaller):
"""Install Python projects and dependencies via pip using isolated environments."""
layer = "python"
# ----------------------------------------------------------------------
# Installer activation logic
# ----------------------------------------------------------------------
def supports(self, ctx: "RepoContext") -> bool:
"""
Return True if this installer should handle this repository.
The installer is active only when:
- A pyproject.toml exists in the repo, and
- PKGMGR_DISABLE_PYTHON_INSTALLER is not set.
"""
if os.environ.get("PKGMGR_DISABLE_PYTHON_INSTALLER") == "1":
print("[INFO] PythonInstaller disabled via PKGMGR_DISABLE_PYTHON_INSTALLER.")
return False
return os.path.exists(os.path.join(ctx.repo_dir, "pyproject.toml"))
# ----------------------------------------------------------------------
# Virtualenv handling
# ----------------------------------------------------------------------
def _in_virtualenv(self) -> bool:
"""Detect whether the current interpreter is inside a venv."""
if os.environ.get("VIRTUAL_ENV"):
return True
base = getattr(sys, "base_prefix", sys.prefix)
return sys.prefix != base
def _ensure_repo_venv(self, ctx: "InstallContext") -> str:
"""
Ensure that ~/.venvs/<identifier>/ exists and contains a minimal venv.
Returns the venv directory path.
"""
venv_dir = os.path.expanduser(f"~/.venvs/{ctx.identifier}")
python = sys.executable
if not os.path.isdir(venv_dir):
print(f"[python-installer] Creating virtualenv: {venv_dir}")
subprocess.check_call([python, "-m", "venv", venv_dir])
return venv_dir
# ----------------------------------------------------------------------
# pip command resolution
# ----------------------------------------------------------------------
def _pip_cmd(self, ctx: "InstallContext") -> str:
"""
Determine which pip command to use.
Priority:
1. PKGMGR_PIP override given by user or automation.
2. Active virtualenv → use sys.executable -m pip.
3. Per-repository venv → ~/.venvs/<repo>/bin/pip
"""
explicit = os.environ.get("PKGMGR_PIP", "").strip()
if explicit:
return explicit
if self._in_virtualenv():
return f"{sys.executable} -m pip"
venv_dir = self._ensure_repo_venv(ctx)
pip_path = os.path.join(venv_dir, "bin", "pip")
return pip_path
# ----------------------------------------------------------------------
# Execution
# ----------------------------------------------------------------------
def run(self, ctx: "InstallContext") -> None:
"""
Install the project defined by pyproject.toml.
Uses the resolved pip environment. Installation is isolated and never
touches system Python.
"""
if not self.supports(ctx): # type: ignore[arg-type]
return
pyproject = os.path.join(ctx.repo_dir, "pyproject.toml")
if not os.path.exists(pyproject):
return
print(f"[python-installer] Installing Python project for {ctx.identifier}...")
pip_cmd = self._pip_cmd(ctx)
# Final install command: ALWAYS isolated, never system-wide.
install_cmd = f"{pip_cmd} install ."
run_command(install_cmd, cwd=ctx.repo_dir, preview=ctx.preview)
print(f"[python-installer] Installation finished for {ctx.identifier}.")

View File

@@ -0,0 +1,91 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
CLI layer model for the pkgmgr installation pipeline.
We treat CLI entry points as coming from one of four conceptual layers:
- os-packages : system package managers (pacman/apt/dnf/…)
- nix : Nix flake / nix profile
- python : pip / virtualenv / user-local scripts
- makefile : repo-local Makefile / scripts inside the repo
The layer order defines precedence: higher layers "own" the CLI and
lower layers will not be executed once a higher-priority CLI exists.
"""
from __future__ import annotations
import os
from enum import Enum
from typing import Optional
class CliLayer(str, Enum):
OS_PACKAGES = "os-packages"
NIX = "nix"
PYTHON = "python"
MAKEFILE = "makefile"
# Highest priority first
CLI_LAYERS: list[CliLayer] = [
CliLayer.OS_PACKAGES,
CliLayer.NIX,
CliLayer.PYTHON,
CliLayer.MAKEFILE,
]
def layer_priority(layer: Optional[CliLayer]) -> int:
"""
Return a numeric priority index for a given layer.
Lower index → higher priority.
Unknown / None → very low priority.
"""
if layer is None:
return len(CLI_LAYERS)
try:
return CLI_LAYERS.index(layer)
except ValueError:
return len(CLI_LAYERS)
def classify_command_layer(command: str, repo_dir: str) -> CliLayer:
"""
Heuristically classify a resolved command path into a CLI layer.
Rules (best effort):
- /usr/... or /bin/... → os-packages
- /nix/store/... or ~/.nix-profile → nix
- ~/.local/bin/... → python
- inside repo_dir → makefile
- everything else → python (user/venv scripts, etc.)
"""
command_abs = os.path.abspath(os.path.expanduser(command))
repo_abs = os.path.abspath(repo_dir)
home = os.path.expanduser("~")
# OS package managers
if command_abs.startswith("/usr/") or command_abs.startswith("/bin/"):
return CliLayer.OS_PACKAGES
# Nix store / profile
if command_abs.startswith("/nix/store/") or command_abs.startswith(
os.path.join(home, ".nix-profile")
):
return CliLayer.NIX
# User-local bin
if command_abs.startswith(os.path.join(home, ".local", "bin")):
return CliLayer.PYTHON
# Inside the repository → usually a Makefile/script
if command_abs.startswith(repo_abs):
return CliLayer.MAKEFILE
# Fallback: treat as Python-style/user-level script
return CliLayer.PYTHON
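Usage sketch of the classification helpers; the paths are examples, not real installations:

from pkgmgr.actions.install.layers import CliLayer, classify_command_layer, layer_priority

repo_dir = "/opt/repos/github.com/kevinveenbirkenbach/package-manager"
assert classify_command_layer("/usr/bin/pkgmgr", repo_dir) is CliLayer.OS_PACKAGES
assert classify_command_layer("~/.nix-profile/bin/pkgmgr", repo_dir) is CliLayer.NIX
assert classify_command_layer("~/.local/bin/pkgmgr", repo_dir) is CliLayer.PYTHON
assert classify_command_layer(f"{repo_dir}/main.sh", repo_dir) is CliLayer.MAKEFILE
# Lower index means higher precedence:
assert layer_priority(CliLayer.OS_PACKAGES) < layer_priority(CliLayer.MAKEFILE)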

View File

@@ -0,0 +1,257 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installation pipeline orchestration for repositories.
This module implements the "Setup Controller" logic:
1. Detect current CLI command for the repo (if any).
2. Classify it into a layer (os-packages, nix, python, makefile).
3. Iterate over installers in layer order:
- Skip installers whose layer is weaker than an already-loaded one.
- Run only installers that support() the repo and add new capabilities.
- After each installer, re-resolve the command and update the layer.
4. Maintain the repo["command"] field and create/update symlinks via create_ink().
The goal is to prevent conflicting installations and make the layering
behaviour explicit and testable.
"""
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional, Sequence, Set
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.actions.install.layers import (
CliLayer,
classify_command_layer,
layer_priority,
)
from pkgmgr.core.command.ink import create_ink
from pkgmgr.core.command.resolve import resolve_command_for_repo
@dataclass
class CommandState:
"""
Represents the current CLI state for a repository:
- command: absolute or relative path to the CLI entry point
- layer: which conceptual layer this command belongs to
"""
command: Optional[str]
layer: Optional[CliLayer]
class CommandResolver:
"""
Small helper responsible for resolving the current command for a repo
and mapping it into a CommandState.
"""
def __init__(self, ctx: RepoContext) -> None:
self._ctx = ctx
def resolve(self) -> CommandState:
"""
Resolve the current command for this repository.
If resolve_command_for_repo raises SystemExit (e.g. Python package
without installed entry point), we treat this as "no command yet"
from the point of view of the installers.
"""
repo = self._ctx.repo
identifier = self._ctx.identifier
repo_dir = self._ctx.repo_dir
try:
cmd = resolve_command_for_repo(
repo=repo,
repo_identifier=identifier,
repo_dir=repo_dir,
)
except SystemExit:
cmd = None
if not cmd:
return CommandState(command=None, layer=None)
layer = classify_command_layer(cmd, repo_dir)
return CommandState(command=cmd, layer=layer)
class InstallationPipeline:
"""
High-level orchestrator that applies a sequence of installers
to a repository based on CLI layer precedence.
"""
def __init__(self, installers: Sequence[BaseInstaller]) -> None:
self._installers = list(installers)
# ------------------------------------------------------------------
# Public API
# ------------------------------------------------------------------
def run(self, ctx: RepoContext) -> None:
"""
Execute the installation pipeline for a single repository.
- Detect initial command & layer.
- Optionally create a symlink.
- Run installers in order, skipping those whose layer is weaker
than an already-loaded CLI.
- After each installer, re-resolve the command and refresh the
symlink if needed.
"""
repo = ctx.repo
repo_dir = ctx.repo_dir
identifier = ctx.identifier
repositories_base_dir = ctx.repositories_base_dir
bin_dir = ctx.bin_dir
all_repos = ctx.all_repos
quiet = ctx.quiet
preview = ctx.preview
resolver = CommandResolver(ctx)
state = resolver.resolve()
# Persist initial command (if any) and create a symlink.
if state.command:
repo["command"] = state.command
create_ink(
repo,
repositories_base_dir,
bin_dir,
all_repos,
quiet=quiet,
preview=preview,
)
else:
repo.pop("command", None)
provided_capabilities: Set[str] = set()
# Main installer loop
for installer in self._installers:
layer_name = getattr(installer, "layer", None)
# Installers without a layer participate without precedence logic.
if layer_name is None:
self._run_installer(installer, ctx, identifier, repo_dir, quiet)
continue
try:
installer_layer = CliLayer(layer_name)
except ValueError:
# Unknown layer string → treat as lowest priority.
installer_layer = None
# "Previous/Current layer already loaded?"
if state.layer is not None and installer_layer is not None:
current_prio = layer_priority(state.layer)
installer_prio = layer_priority(installer_layer)
if current_prio < installer_prio:
# Current CLI comes from a higher-priority layer,
# so we skip this installer entirely.
if not quiet:
print(
f"[pkgmgr] Skipping installer "
f"{installer.__class__.__name__} for {identifier} "
f"CLI already provided by layer {state.layer.value!r}."
)
continue
if current_prio == installer_prio:
# Same layer already provides a CLI; usually there is no
# need to run another installer on top of it.
if not quiet:
print(
f"[pkgmgr] Skipping installer "
f"{installer.__class__.__name__} for {identifier} "
f"layer {installer_layer.value!r} is already loaded."
)
continue
# Check if this installer is applicable at all.
if not installer.supports(ctx):
continue
# Capabilities: if everything this installer would provide is already
# covered, we can safely skip it.
caps = installer.discover_capabilities(ctx)
if caps and caps.issubset(provided_capabilities):
if not quiet:
print(
f"Skipping installer {installer.__class__.__name__} "
f"for {identifier} capabilities {caps} already provided."
)
continue
if not quiet:
print(
f"[pkgmgr] Running installer {installer.__class__.__name__} "
f"for {identifier} in '{repo_dir}' "
f"(new capabilities: {caps or set()})..."
)
# Run the installer with error reporting.
self._run_installer(installer, ctx, identifier, repo_dir, quiet)
provided_capabilities.update(caps)
# After running an installer, re-resolve the command and layer.
new_state = resolver.resolve()
if new_state.command:
repo["command"] = new_state.command
create_ink(
repo,
repositories_base_dir,
bin_dir,
all_repos,
quiet=quiet,
preview=preview,
)
else:
repo.pop("command", None)
state = new_state
# ------------------------------------------------------------------
# Internal helpers
# ------------------------------------------------------------------
@staticmethod
def _run_installer(
installer: BaseInstaller,
ctx: RepoContext,
identifier: str,
repo_dir: str,
quiet: bool,
) -> None:
"""
Execute a single installer with unified error handling.
"""
try:
installer.run(ctx)
except SystemExit as exc:
exit_code = exc.code if isinstance(exc.code, int) else str(exc.code)
print(
f"[ERROR] Installer {installer.__class__.__name__} failed "
f"for repository {identifier} (dir: {repo_dir}) "
f"with exit code {exit_code}."
)
print(
"[ERROR] This usually means an underlying command failed "
"(e.g. 'make install', 'nix build', 'pip install', ...)."
)
print(
"[ERROR] Check the log above for the exact command output. "
"You can also run this repository in isolation via:\n"
f" pkgmgr install {identifier} "
"--clone-mode shallow --no-verification"
)
raise
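A hedged wiring sketch for a single repository; the keyword arguments mirror the RepoContext attributes read in run() above, but the exact dataclass signature and the pipeline module path are assumptions:

# Illustrative wiring; preview=True keeps commands from being executed.
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.pipeline import InstallationPipeline  # module path assumed

repo = {"provider": "github.com", "account": "kevinveenbirkenbach",
        "repository": "package-manager"}
ctx = RepoContext(                       # field names taken from run(); signature assumed
    repo=repo,
    identifier="github.com/kevinveenbirkenbach/package-manager",
    repo_dir="/opt/repos/github.com/kevinveenbirkenbach/package-manager",
    repositories_base_dir="/opt/repos",
    bin_dir="/opt/bin",
    all_repos=[repo],
    quiet=False,
    preview=True,
)
InstallationPipeline(installers=[]).run(ctx)  # real usage passes e.g. PythonInstaller()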

View File

@@ -1,7 +1,7 @@
import os
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.run_command import run_command
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.command.run import run_command
import sys
def exec_proxy_command(proxy_prefix: str, selected_repos, repositories_base_dir, all_repos, proxy_command: str, extra_args, preview: bool):

View File

@@ -0,0 +1,310 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Release helper for pkgmgr (public entry point).
This package provides the high-level `release()` function used by the
pkgmgr CLI to perform versioned releases:
- Determine the next semantic version based on existing Git tags.
- Update pyproject.toml with the new version.
- Update additional packaging files (flake.nix, PKGBUILD,
debian/changelog, RPM spec) where present.
- Prepend a basic entry to CHANGELOG.md.
- Move the floating 'latest' tag to the newly created release tag so
the newest release is always marked as latest.
Additional behaviour:
- If `preview=True` (from --preview), no files are written and no
Git commands are executed. Instead, a detailed summary of the
planned changes and commands is printed.
- If `preview=False` and not forced, the release is executed in two
phases:
1) Preview-only run (dry-run).
2) Interactive confirmation, then real release if confirmed.
This confirmation can be skipped with the `force=True` flag.
- Before creating and pushing tags, main/master is updated from origin
when the release is performed on one of these branches.
- If `close=True` is used and the current branch is not main/master,
the branch will be closed via branch_commands.close_branch() after
a successful release.
"""
from __future__ import annotations
import os
import sys
from typing import Optional
from pkgmgr.core.git import get_current_branch, GitError
from pkgmgr.actions.branch import close_branch
from .versioning import determine_current_version, bump_semver
from .git_ops import run_git_command, sync_branch_with_remote, update_latest_tag
from .files import (
update_pyproject_version,
update_flake_version,
update_pkgbuild_version,
update_spec_version,
update_changelog,
update_debian_changelog,
update_spec_changelog,
)
# ---------------------------------------------------------------------------
# Internal implementation (single-phase, preview or real)
# ---------------------------------------------------------------------------
def _release_impl(
pyproject_path: str = "pyproject.toml",
changelog_path: str = "CHANGELOG.md",
release_type: str = "patch",
message: Optional[str] = None,
preview: bool = False,
close: bool = False,
) -> None:
"""
Internal implementation that performs a single-phase release.
"""
current_ver = determine_current_version()
new_ver = bump_semver(current_ver, release_type)
new_ver_str = str(new_ver)
new_tag = new_ver.to_tag(with_prefix=True)
mode = "PREVIEW" if preview else "REAL"
print(f"Release mode: {mode}")
print(f"Current version: {current_ver}")
print(f"New version: {new_ver_str} ({release_type})")
repo_root = os.path.dirname(os.path.abspath(pyproject_path))
# Update core project metadata and packaging files
update_pyproject_version(pyproject_path, new_ver_str, preview=preview)
changelog_message = update_changelog(
changelog_path,
new_ver_str,
message=message,
preview=preview,
)
flake_path = os.path.join(repo_root, "flake.nix")
update_flake_version(flake_path, new_ver_str, preview=preview)
pkgbuild_path = os.path.join(repo_root, "PKGBUILD")
update_pkgbuild_version(pkgbuild_path, new_ver_str, preview=preview)
spec_path = os.path.join(repo_root, "package-manager.spec")
update_spec_version(spec_path, new_ver_str, preview=preview)
# Determine a single effective_message to be reused across all
# changelog targets (project, Debian, Fedora).
effective_message: Optional[str] = message
if effective_message is None and isinstance(changelog_message, str):
if changelog_message.strip():
effective_message = changelog_message.strip()
debian_changelog_path = os.path.join(repo_root, "debian", "changelog")
package_name = os.path.basename(repo_root) or "package-manager"
# Debian changelog
update_debian_changelog(
debian_changelog_path,
package_name=package_name,
new_version=new_ver_str,
message=effective_message,
preview=preview,
)
# Fedora / RPM %changelog
update_spec_changelog(
spec_path=spec_path,
package_name=package_name,
new_version=new_ver_str,
message=effective_message,
preview=preview,
)
commit_msg = f"Release version {new_ver_str}"
tag_msg = effective_message or commit_msg
# Determine branch and ensure it is up to date if main/master
try:
branch = get_current_branch() or "main"
except GitError:
branch = "main"
print(f"Releasing on branch: {branch}")
# Ensure main/master are up-to-date from origin before creating and
# pushing tags. For other branches we only log the intent.
sync_branch_with_remote(branch, preview=preview)
files_to_add = [
pyproject_path,
changelog_path,
flake_path,
pkgbuild_path,
spec_path,
debian_changelog_path,
]
existing_files = [p for p in files_to_add if p and os.path.exists(p)]
if preview:
for path in existing_files:
print(f"[PREVIEW] Would run: git add {path}")
print(f'[PREVIEW] Would run: git commit -am "{commit_msg}"')
print(f'[PREVIEW] Would run: git tag -a {new_tag} -m "{tag_msg}"')
print(f"[PREVIEW] Would run: git push origin {branch}")
print("[PREVIEW] Would run: git push origin --tags")
# Also update the floating 'latest' tag to the new highest SemVer.
update_latest_tag(new_tag, preview=True)
if close and branch not in ("main", "master"):
print(
f"[PREVIEW] Would also close branch {branch} after the release "
"(close=True and branch is not main/master)."
)
elif close:
print(
f"[PREVIEW] close=True but current branch is {branch}; "
"no branch would be closed."
)
print("Preview completed. No changes were made.")
return
for path in existing_files:
run_git_command(f"git add {path}")
run_git_command(f'git commit -am "{commit_msg}"')
run_git_command(f'git tag -a {new_tag} -m "{tag_msg}"')
run_git_command(f"git push origin {branch}")
run_git_command("git push origin --tags")
# Move 'latest' to the new release tag so the newest SemVer is always
# marked as latest. This is best-effort and must not break the release.
try:
update_latest_tag(new_tag, preview=False)
except GitError as exc: # pragma: no cover
print(
f"[WARN] Failed to update floating 'latest' tag for {new_tag}: {exc}\n"
"[WARN] The release itself completed successfully; only the "
"'latest' tag was not updated."
)
print(f"Release {new_ver_str} completed.")
if close:
if branch in ("main", "master"):
print(
f"[INFO] close=True but current branch is {branch}; "
"nothing to close."
)
return
print(
f"[INFO] Closing branch {branch} after successful release "
"(close=True and branch is not main/master)..."
)
try:
close_branch(name=branch, base_branch="main", cwd=".")
except Exception as exc: # pragma: no cover
print(f"[WARN] Failed to close branch {branch} automatically: {exc}")
# ---------------------------------------------------------------------------
# Public release entry point
# ---------------------------------------------------------------------------
def release(
pyproject_path: str = "pyproject.toml",
changelog_path: str = "CHANGELOG.md",
release_type: str = "patch",
message: Optional[str] = None,
preview: bool = False,
force: bool = False,
close: bool = False,
) -> None:
"""
High-level release entry point.
Modes:
- preview=True:
* Single-phase PREVIEW only.
- preview=False, force=True:
* Single-phase REAL release, no interactive preview.
- preview=False, force=False:
* Two-phase flow (intended default for interactive CLI use).
"""
if preview:
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=True,
close=close,
)
return
if force:
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=False,
close=close,
)
return
if not sys.stdin.isatty():
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=False,
close=close,
)
return
print("[INFO] Running preview before actual release...\n")
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=True,
close=close,
)
try:
answer = input("Proceed with the actual release? [y/N]: ").strip().lower()
except (EOFError, KeyboardInterrupt):
print("\n[INFO] Release aborted (no confirmation).")
return
if answer not in ("y", "yes"):
print("Release aborted by user. No changes were made.")
return
print("\n[INFO] Running REAL release...\n")
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=False,
close=close,
)
__all__ = ["release"]

View File

@@ -0,0 +1,537 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
File and metadata update helpers for the release workflow.
Responsibilities:
- Update pyproject.toml with the new version.
- Update flake.nix, PKGBUILD, RPM spec files where present.
- Prepend release entries to CHANGELOG.md.
- Maintain distribution-specific changelog files:
* debian/changelog
* RPM spec %changelog section
including maintainer metadata where applicable.
"""
from __future__ import annotations
import os
import re
import subprocess
import sys
import tempfile
from datetime import date, datetime
from typing import Optional, Tuple
# ---------------------------------------------------------------------------
# Editor helper for interactive changelog messages
# ---------------------------------------------------------------------------
def _open_editor_for_changelog(initial_message: Optional[str] = None) -> str:
"""
Open $EDITOR (fallback 'nano') so the user can enter a changelog message.
The temporary file is pre-filled with commented instructions and an
optional initial_message. Lines starting with '#' are ignored when the
message is read back.
Returns the final message (may be empty string if user leaves it blank).
"""
editor = os.environ.get("EDITOR", "nano")
with tempfile.NamedTemporaryFile(
mode="w+",
delete=False,
encoding="utf-8",
) as tmp:
tmp_path = tmp.name
tmp.write(
"# Write the changelog entry for this release.\n"
"# Lines starting with '#' will be ignored.\n"
"# Empty result will fall back to a generic message.\n\n"
)
if initial_message:
tmp.write(initial_message.strip() + "\n")
tmp.flush()
try:
subprocess.call([editor, tmp_path])
except FileNotFoundError:
print(
f"[WARN] Editor {editor!r} not found; proceeding without "
"interactive changelog message."
)
try:
with open(tmp_path, "r", encoding="utf-8") as f:
content = f.read()
finally:
try:
os.remove(tmp_path)
except OSError:
pass
lines = [
line for line in content.splitlines()
if not line.strip().startswith("#")
]
return "\n".join(lines).strip()
# ---------------------------------------------------------------------------
# File update helpers (pyproject + extra packaging + changelog)
# ---------------------------------------------------------------------------
def update_pyproject_version(
pyproject_path: str,
new_version: str,
preview: bool = False,
) -> None:
"""
Update the version in pyproject.toml with the new version.
The function looks for a line matching:
version = "X.Y.Z"
and replaces the version part with the given new_version string.
If the file does not exist, it is skipped without failing the release.
"""
if not os.path.exists(pyproject_path):
print(
f"[INFO] pyproject.toml not found at: {pyproject_path}, "
"skipping version update."
)
return
try:
with open(pyproject_path, "r", encoding="utf-8") as f:
content = f.read()
except OSError as exc:
print(
f"[WARN] Could not read pyproject.toml at {pyproject_path}: {exc}. "
"Skipping version update."
)
return
pattern = r'^(version\s*=\s*")([^"]+)(")'
new_content, count = re.subn(
pattern,
lambda m: f'{m.group(1)}{new_version}{m.group(3)}',
content,
flags=re.MULTILINE,
)
if count == 0:
print("[ERROR] Could not find version line in pyproject.toml")
sys.exit(1)
if preview:
print(f"[PREVIEW] Would update pyproject.toml version to {new_version}")
return
with open(pyproject_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(f"Updated pyproject.toml version to {new_version}")
def update_flake_version(
flake_path: str,
new_version: str,
preview: bool = False,
) -> None:
"""
Update the version in flake.nix, if present.
"""
if not os.path.exists(flake_path):
print("[INFO] flake.nix not found, skipping.")
return
try:
with open(flake_path, "r", encoding="utf-8") as f:
content = f.read()
except Exception as exc:
print(f"[WARN] Could not read flake.nix: {exc}")
return
pattern = r'(version\s*=\s*")([^"]+)(")'
new_content, count = re.subn(
pattern,
lambda m: f'{m.group(1)}{new_version}{m.group(3)}',
content,
)
if count == 0:
print("[WARN] No version assignment found in flake.nix, skipping.")
return
if preview:
print(f"[PREVIEW] Would update flake.nix version to {new_version}")
return
with open(flake_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(f"Updated flake.nix version to {new_version}")
def update_pkgbuild_version(
pkgbuild_path: str,
new_version: str,
preview: bool = False,
) -> None:
"""
Update the version in PKGBUILD, if present.
Expects:
pkgver=1.2.3
pkgrel=1
"""
if not os.path.exists(pkgbuild_path):
print("[INFO] PKGBUILD not found, skipping.")
return
try:
with open(pkgbuild_path, "r", encoding="utf-8") as f:
content = f.read()
except Exception as exc:
print(f"[WARN] Could not read PKGBUILD: {exc}")
return
ver_pattern = r"^(pkgver\s*=\s*)(.+)$"
new_content, ver_count = re.subn(
ver_pattern,
lambda m: f"{m.group(1)}{new_version}",
content,
flags=re.MULTILINE,
)
if ver_count == 0:
print("[WARN] No pkgver line found in PKGBUILD.")
new_content = content
rel_pattern = r"^(pkgrel\s*=\s*)(.+)$"
new_content, rel_count = re.subn(
rel_pattern,
lambda m: f"{m.group(1)}1",
new_content,
flags=re.MULTILINE,
)
if rel_count == 0:
print("[WARN] No pkgrel line found in PKGBUILD.")
if preview:
print(f"[PREVIEW] Would update PKGBUILD to pkgver={new_version}, pkgrel=1")
return
with open(pkgbuild_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(f"Updated PKGBUILD to pkgver={new_version}, pkgrel=1")
def update_spec_version(
spec_path: str,
new_version: str,
preview: bool = False,
) -> None:
"""
Update the version in an RPM spec file, if present.
"""
if not os.path.exists(spec_path):
print("[INFO] RPM spec file not found, skipping.")
return
try:
with open(spec_path, "r", encoding="utf-8") as f:
content = f.read()
except Exception as exc:
print(f"[WARN] Could not read spec file: {exc}")
return
ver_pattern = r"^(Version:\s*)(.+)$"
new_content, ver_count = re.subn(
ver_pattern,
lambda m: f"{m.group(1)}{new_version}",
content,
flags=re.MULTILINE,
)
if ver_count == 0:
print("[WARN] No 'Version:' line found in spec file.")
rel_pattern = r"^(Release:\s*)(.+)$"
def _release_repl(m: re.Match[str]) -> str: # type: ignore[name-defined]
rest = m.group(2).strip()
match = re.match(r"^(\d+)(.*)$", rest)
if match:
suffix = match.group(2)
else:
suffix = ""
return f"{m.group(1)}1{suffix}"
new_content, rel_count = re.subn(
rel_pattern,
_release_repl,
new_content,
flags=re.MULTILINE,
)
if rel_count == 0:
print("[WARN] No 'Release:' line found in spec file.")
if preview:
print(
f"[PREVIEW] Would update spec file "
f"{os.path.basename(spec_path)} to Version: {new_version}, Release: 1..."
)
return
with open(spec_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(
f"Updated spec file {os.path.basename(spec_path)} "
f"to Version: {new_version}, Release: 1..."
)
def update_changelog(
changelog_path: str,
new_version: str,
message: Optional[str] = None,
preview: bool = False,
) -> str:
"""
Prepend a new release section to CHANGELOG.md with the new version,
current date, and a message.
"""
today = date.today().isoformat()
if message is None:
if preview:
message = "Automated release."
else:
print(
"\n[INFO] No release message provided, opening editor for "
"changelog entry...\n"
)
editor_message = _open_editor_for_changelog()
if not editor_message:
message = "Automated release."
else:
message = editor_message
header = f"## [{new_version}] - {today}\n"
header += f"\n* {message}\n\n"
if os.path.exists(changelog_path):
try:
with open(changelog_path, "r", encoding="utf-8") as f:
changelog = f.read()
except Exception as exc:
print(f"[WARN] Could not read existing CHANGELOG.md: {exc}")
changelog = ""
else:
changelog = ""
new_changelog = (header + "\n" + changelog) if changelog else header
print("\n================ CHANGELOG ENTRY ================")
print(header.rstrip())
print("=================================================\n")
if preview:
print(f"[PREVIEW] Would prepend new entry for {new_version} to CHANGELOG.md")
return message
with open(changelog_path, "w", encoding="utf-8") as f:
f.write(new_changelog)
print(f"Updated CHANGELOG.md with version {new_version}")
return message
# ---------------------------------------------------------------------------
# Debian changelog helpers (with Git config fallback for maintainer)
# ---------------------------------------------------------------------------
def _get_git_config_value(key: str) -> Optional[str]:
"""
Try to read a value from `git config --get <key>`.
"""
try:
result = subprocess.run(
["git", "config", "--get", key],
capture_output=True,
text=True,
check=False,
)
except Exception:
return None
value = result.stdout.strip()
return value or None
def _get_debian_author() -> Tuple[str, str]:
"""
Determine the maintainer name/email for debian/changelog entries.
"""
name = os.environ.get("DEBFULLNAME")
email = os.environ.get("DEBEMAIL")
if not name:
name = os.environ.get("GIT_AUTHOR_NAME")
if not email:
email = os.environ.get("GIT_AUTHOR_EMAIL")
if not name:
name = _get_git_config_value("user.name")
if not email:
email = _get_git_config_value("user.email")
if not name:
name = "Unknown Maintainer"
if not email:
email = "unknown@example.com"
return name, email
def update_debian_changelog(
debian_changelog_path: str,
package_name: str,
new_version: str,
message: Optional[str] = None,
preview: bool = False,
) -> None:
"""
Prepend a new entry to debian/changelog, if it exists.
"""
if not os.path.exists(debian_changelog_path):
print("[INFO] debian/changelog not found, skipping.")
return
debian_version = f"{new_version}-1"
now = datetime.now().astimezone()
date_str = now.strftime("%a, %d %b %Y %H:%M:%S %z")
author_name, author_email = _get_debian_author()
first_line = f"{package_name} ({debian_version}) unstable; urgency=medium"
body_line = message.strip() if message else f"Automated release {new_version}."
stanza = (
f"{first_line}\n\n"
f" * {body_line}\n\n"
f" -- {author_name} <{author_email}> {date_str}\n\n"
)
if preview:
print(
"[PREVIEW] Would prepend the following stanza to debian/changelog:\n"
f"{stanza}"
)
return
try:
with open(debian_changelog_path, "r", encoding="utf-8") as f:
existing = f.read()
except Exception as exc:
print(f"[WARN] Could not read debian/changelog: {exc}")
existing = ""
new_content = stanza + existing
with open(debian_changelog_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(f"Updated debian/changelog with version {debian_version}")
# ---------------------------------------------------------------------------
# Fedora / RPM spec %changelog helper
# ---------------------------------------------------------------------------
def update_spec_changelog(
spec_path: str,
package_name: str,
new_version: str,
message: Optional[str] = None,
preview: bool = False,
) -> None:
"""
Prepend a new entry to the %changelog section of an RPM spec file,
if present.
Typical RPM-style entry:
* Tue Dec 09 2025 John Doe <john@example.com> - 0.5.1-1
- Your changelog message
"""
if not os.path.exists(spec_path):
print("[INFO] RPM spec file not found, skipping spec changelog update.")
return
try:
with open(spec_path, "r", encoding="utf-8") as f:
content = f.read()
except Exception as exc:
print(f"[WARN] Could not read spec file for changelog update: {exc}")
return
debian_version = f"{new_version}-1"
now = datetime.now().astimezone()
date_str = now.strftime("%a %b %d %Y")
# Reuse Debian maintainer discovery for author name/email.
author_name, author_email = _get_debian_author()
body_line = message.strip() if message else f"Automated release {new_version}."
stanza = (
f"* {date_str} {author_name} <{author_email}> - {debian_version}\n"
f"- {body_line}\n\n"
)
marker = "%changelog"
idx = content.find(marker)
if idx == -1:
# No %changelog section yet: append one at the end.
new_content = content.rstrip() + "\n\n%changelog\n" + stanza
else:
# Insert stanza right after the %changelog line.
before = content[: idx + len(marker)]
after = content[idx + len(marker) :]
new_content = before + "\n" + stanza + after.lstrip("\n")
if preview:
print(
"[PREVIEW] Would update RPM %changelog section with the following "
"stanza:\n"
f"{stanza}"
)
return
try:
with open(spec_path, "w", encoding="utf-8") as f:
f.write(new_content)
except Exception as exc:
print(f"[WARN] Failed to write updated spec changelog section: {exc}")
return
print(
f"Updated RPM %changelog section in {os.path.basename(spec_path)} "
f"for {package_name} {debian_version}"
)
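The pyproject update above hinges on a single MULTILINE regex; a minimal reproduction of that step with illustrative file content:

# Reproduces the substitution done by update_pyproject_version().
import re

content = 'name = "package-manager"\nversion = "0.9.0"\n'
pattern = r'^(version\s*=\s*")([^"]+)(")'
new_content, count = re.subn(
    pattern,
    lambda m: f'{m.group(1)}0.9.1{m.group(3)}',
    content,
    flags=re.MULTILINE,
)
assert count == 1 and 'version = "0.9.1"' in new_content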

View File

@@ -0,0 +1,95 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Git-related helpers for the release workflow.
Responsibilities:
- Run Git (or shell) commands with basic error reporting.
- Ensure main/master are synchronized with origin before tagging.
- Maintain the floating 'latest' tag that always points to the newest
release tag.
"""
from __future__ import annotations
import subprocess
from pkgmgr.core.git import GitError
def run_git_command(cmd: str) -> None:
"""
Run a Git (or shell) command with basic error reporting.
The command is executed via the shell, primarily for readability
when printed (as in 'git commit -am "msg"').
"""
print(f"[GIT] {cmd}")
try:
subprocess.run(cmd, shell=True, check=True)
except subprocess.CalledProcessError as exc:
print(f"[ERROR] Git command failed: {cmd}")
print(f" Exit code: {exc.returncode}")
if exc.stdout:
print("--- stdout ---")
print(exc.stdout)
if exc.stderr:
print("--- stderr ---")
print(exc.stderr)
raise GitError(f"Git command failed: {cmd}") from exc
def sync_branch_with_remote(branch: str, preview: bool = False) -> None:
"""
Ensure the local main/master branch is up-to-date before tagging.
Behaviour:
- For main/master: run 'git fetch origin' and 'git pull origin <branch>'.
- For all other branches: only log that no automatic sync is performed.
"""
if branch not in ("main", "master"):
print(
f"[INFO] Skipping automatic git pull for non-main/master branch "
f"{branch}."
)
return
print(
f"[INFO] Updating branch {branch} from origin before creating tags..."
)
if preview:
print("[PREVIEW] Would run: git fetch origin")
print(f"[PREVIEW] Would run: git pull origin {branch}")
return
run_git_command("git fetch origin")
run_git_command(f"git pull origin {branch}")
def update_latest_tag(new_tag: str, preview: bool = False) -> None:
"""
Move the floating 'latest' tag to the newly created release tag.
Implementation details:
- We explicitly dereference the tag object via `<tag>^{}` so that
'latest' always points at the underlying commit, not at another tag.
- We create/update 'latest' as an annotated tag with a short message so
Git configurations that enforce annotated/signed tags do not fail
with "no tag message".
"""
target_ref = f"{new_tag}^{{}}"
print(f"[INFO] Updating 'latest' tag to point at {new_tag} (commit {target_ref})...")
if preview:
print(f"[PREVIEW] Would run: git tag -f -a latest {target_ref} "
f'-m "Floating latest tag for {new_tag}"')
print("[PREVIEW] Would run: git push origin latest --force")
return
run_git_command(
f'git tag -f -a latest {target_ref} '
f'-m "Floating latest tag for {new_tag}"'
)
run_git_command("git push origin latest --force")

View File

@@ -0,0 +1,53 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Version discovery and bumping helpers for the release workflow.
"""
from __future__ import annotations
from pkgmgr.core.git import get_tags
from pkgmgr.core.version.semver import (
SemVer,
find_latest_version,
bump_major,
bump_minor,
bump_patch,
)
def determine_current_version() -> SemVer:
"""
Determine the current semantic version from Git tags.
Behaviour:
- If there are no tags or no SemVer-compatible tags, return 0.0.0.
- Otherwise, use the latest SemVer tag as current version.
"""
tags = get_tags()
if not tags:
return SemVer(0, 0, 0)
latest = find_latest_version(tags)
if latest is None:
return SemVer(0, 0, 0)
_tag, ver = latest
return ver
def bump_semver(current: SemVer, release_type: str) -> SemVer:
"""
Bump the given SemVer according to the release type.
release_type must be one of: "major", "minor", "patch".
"""
if release_type == "major":
return bump_major(current)
if release_type == "minor":
return bump_minor(current)
if release_type == "patch":
return bump_patch(current)
raise ValueError(f"Unknown release type: {release_type!r}")

View File

@@ -1,8 +1,8 @@
import subprocess
import os
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.verify import verify_repository
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.verify import verify_repository
def clone_repos(
selected_repos,

View File

@@ -2,8 +2,8 @@ import os
import subprocess
import sys
import yaml
from pkgmgr.generate_alias import generate_alias
from pkgmgr.save_user_config import save_user_config
from pkgmgr.core.command.alias import generate_alias
from pkgmgr.core.config.save import save_user_config
def create_repo(identifier, config_merged, user_config_path, bin_dir, remote=False, preview=False):
"""

View File

@@ -1,7 +1,7 @@
import os
import sys
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
def deinstall_repos(selected_repos, repositories_base_dir, bin_dir, all_repos, preview=False):
for repo in selected_repos:

View File

@@ -1,7 +1,7 @@
import shutil
import os
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
def delete_repos(selected_repos, repositories_base_dir, all_repos, preview=False):
for repo in selected_repos:

View File

@@ -0,0 +1,352 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Pretty-print repository list with status, categories, tags and path.
- Tags come exclusively from YAML: repo["tags"].
- Categories come from repo["category_files"] (YAML file names without
.yml/.yaml) and optional repo["category"].
- Optional detail mode (--description) prints an extended section per
repository with description, homepage, etc.
"""
from __future__ import annotations
import os
import re
from textwrap import wrap
from typing import Any, Dict, List, Optional
Repository = Dict[str, Any]
RESET = "\033[0m"
BOLD = "\033[1m"
DIM = "\033[2m"
GREEN = "\033[32m"
YELLOW = "\033[33m"
RED = "\033[31m"
MAGENTA = "\033[35m"
GREY = "\033[90m"
def _compile_maybe_regex(pattern: str) -> Optional[re.Pattern[str]]:
"""
If pattern is of the form /.../, return a compiled regex (case-insensitive).
Otherwise return None.
"""
if not pattern:
return None
if len(pattern) >= 2 and pattern.startswith("/") and pattern.endswith("/"):
try:
return re.compile(pattern[1:-1], re.IGNORECASE)
except re.error:
return None
return None
def _status_matches(status: str, status_filter: str) -> bool:
"""
Match a status string against an optional filter (substring or /regex/).
"""
if not status_filter:
return True
regex = _compile_maybe_regex(status_filter)
if regex:
return bool(regex.search(status))
return status_filter.lower() in status.lower()
def _compute_repo_dir(repositories_base_dir: str, repo: Repository) -> str:
"""
Compute the local directory for a repository.
If the repository already has a 'directory' key, that is used;
otherwise the path is constructed from provider/account/repository
under repositories_base_dir.
"""
if repo.get("directory"):
return os.path.expanduser(str(repo["directory"]))
provider = str(repo.get("provider", ""))
account = str(repo.get("account", ""))
repository = str(repo.get("repository", ""))
return os.path.join(
os.path.expanduser(repositories_base_dir),
provider,
account,
repository,
)
def _compute_status(
repo: Repository,
repo_dir: str,
binaries_dir: str,
) -> str:
"""
Compute a human-readable status string, e.g. 'present,alias,ignored'.
"""
parts: List[str] = []
exists = os.path.isdir(repo_dir)
if exists:
parts.append("present")
else:
parts.append("absent")
alias = repo.get("alias")
if alias:
alias_path = os.path.join(os.path.expanduser(binaries_dir), str(alias))
if os.path.exists(alias_path):
parts.append("alias")
else:
parts.append("alias-missing")
if repo.get("ignore"):
parts.append("ignored")
return ",".join(parts) if parts else "-"
def _color_status(status_padded: str) -> str:
"""
Color individual status flags inside a padded status string.
Input is expected to be right-padded to the column width.
Color mapping:
- present -> green
- absent -> magenta
- alias -> yellow
- alias-missing -> yellow
- ignored -> magenta
- other -> default
"""
core = status_padded.rstrip()
pad_spaces = len(status_padded) - len(core)
plain_parts = core.split(",") if core else []
colored_parts: List[str] = []
for raw_part in plain_parts:
name = raw_part.strip()
if not name:
continue
if name == "present":
color = GREEN
elif name == "absent":
color = MAGENTA
elif name in ("alias", "alias-missing"):
color = YELLOW
elif name == "ignored":
color = MAGENTA
else:
color = ""
if color:
colored_parts.append(f"{color}{name}{RESET}")
else:
colored_parts.append(name)
colored_core = ",".join(colored_parts)
return colored_core + (" " * pad_spaces)
def list_repositories(
repositories: List[Repository],
repositories_base_dir: str,
binaries_dir: str,
search_filter: str = "",
status_filter: str = "",
extra_tags: Optional[List[str]] = None,
show_description: bool = False,
) -> None:
"""
Print a table of repositories and (optionally) detailed descriptions.
Parameters
----------
repositories:
Repositories to show (usually already filtered by get_selected_repos).
repositories_base_dir:
Base directory where repositories live.
binaries_dir:
Directory where alias symlinks live.
search_filter:
Optional substring/regex filter on identifier and metadata.
status_filter:
Optional filter on computed status.
extra_tags:
Additional tags to show for each repository (CLI overlay only).
show_description:
If True, print a detailed block for each repository after the table.
"""
if extra_tags is None:
extra_tags = []
search_regex = _compile_maybe_regex(search_filter)
rows: List[Dict[str, Any]] = []
# ------------------------------------------------------------------
# Build rows
# ------------------------------------------------------------------
for repo in repositories:
identifier = str(repo.get("repository") or repo.get("alias") or "")
alias = str(repo.get("alias") or "")
provider = str(repo.get("provider") or "")
account = str(repo.get("account") or "")
description = str(repo.get("description") or "")
homepage = str(repo.get("homepage") or "")
repo_dir = _compute_repo_dir(repositories_base_dir, repo)
status = _compute_status(repo, repo_dir, binaries_dir)
if not _status_matches(status, status_filter):
continue
if search_filter:
haystack = " ".join(
[
identifier,
alias,
provider,
account,
description,
homepage,
repo_dir,
]
)
if search_regex:
if not search_regex.search(haystack):
continue
else:
if search_filter.lower() not in haystack.lower():
continue
categories: List[str] = []
categories.extend(map(str, repo.get("category_files", [])))
if repo.get("category"):
categories.append(str(repo["category"]))
yaml_tags: List[str] = list(map(str, repo.get("tags", [])))
display_tags: List[str] = sorted(
set(yaml_tags + list(map(str, extra_tags)))
)
rows.append(
{
"repo": repo,
"identifier": identifier,
"status": status,
"categories": categories,
"tags": display_tags,
"dir": repo_dir,
}
)
if not rows:
print("No repositories matched the given filters.")
return
# ------------------------------------------------------------------
# Table section (header grey, values white, per-flag colored status)
# ------------------------------------------------------------------
ident_width = max(len("IDENTIFIER"), max(len(r["identifier"]) for r in rows))
status_width = max(len("STATUS"), max(len(r["status"]) for r in rows))
cat_width = max(
len("CATEGORIES"),
max((len(",".join(r["categories"])) for r in rows), default=0),
)
tag_width = max(
len("TAGS"),
max((len(",".join(r["tags"])) for r in rows), default=0),
)
header = (
f"{GREY}{BOLD}"
f"{'IDENTIFIER'.ljust(ident_width)} "
f"{'STATUS'.ljust(status_width)} "
f"{'CATEGORIES'.ljust(cat_width)} "
f"{'TAGS'.ljust(tag_width)} "
f"DIR"
f"{RESET}"
)
print(header)
print("-" * (ident_width + status_width + cat_width + tag_width + 10 + 40))
for r in rows:
ident_col = r["identifier"].ljust(ident_width)
cat_col = ",".join(r["categories"]).ljust(cat_width)
tag_col = ",".join(r["tags"]).ljust(tag_width)
dir_col = r["dir"]
status = r["status"]
status_padded = status.ljust(status_width)
status_colored = _color_status(status_padded)
print(
f"{ident_col} "
f"{status_colored} "
f"{cat_col} "
f"{tag_col} "
f"{dir_col}"
)
# ------------------------------------------------------------------
# Detailed section (alias value red, same status coloring)
# ------------------------------------------------------------------
if not show_description:
return
print()
print(f"{BOLD}Detailed repository information:{RESET}")
print()
for r in rows:
repo = r["repo"]
identifier = r["identifier"]
alias = str(repo.get("alias") or "")
provider = str(repo.get("provider") or "")
account = str(repo.get("account") or "")
repository = str(repo.get("repository") or "")
description = str(repo.get("description") or "")
homepage = str(repo.get("homepage") or "")
categories = r["categories"]
tags = r["tags"]
repo_dir = r["dir"]
status = r["status"]
print(f"{BOLD}{identifier}{RESET}")
print(f" Provider: {provider}")
print(f" Account: {account}")
print(f" Repository: {repository}")
# Alias value highlighted in red
if alias:
print(f" Alias: {RED}{alias}{RESET}")
status_colored = _color_status(status)
print(f" Status: {status_colored}")
if categories:
print(f" Categories: {', '.join(categories)}")
if tags:
print(f" Tags: {', '.join(tags)}")
print(f" Directory: {repo_dir}")
if homepage:
print(f" Homepage: {homepage}")
if description:
print(" Description:")
for line in wrap(description, width=78):
print(f" {line}")
print()
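A minimal call with an in-memory repository entry; the values and the import path are illustrative, and the 'absent' status simply means no local clone exists:

from pkgmgr.actions.repository.list import list_repositories  # import path assumed

repos = [{
    "provider": "github.com",
    "account": "kevinveenbirkenbach",
    "repository": "package-manager",
    "alias": "pkgmgr",
    "tags": ["cli"],
    "category_files": ["tools"],
    "description": "Example description used only for this sketch.",
}]
list_repositories(
    repos,
    repositories_base_dir="~/Repositories",
    binaries_dir="~/.local/bin",
    show_description=True,
)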

View File

@@ -0,0 +1,77 @@
import os
import subprocess
import sys
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.verify import verify_repository
def pull_with_verification(
selected_repos,
repositories_base_dir,
all_repos,
extra_args,
no_verification,
preview: bool,
) -> None:
"""
Execute `git pull` for each repository with verification.
- Uses verify_repository() in "pull" mode.
- If verification fails (and verification info is set) and
--no-verification is not enabled, the user is prompted to confirm
the pull.
- In preview mode, no interactive prompts are performed and no
Git commands are executed; only the would-be command is printed.
"""
for repo in selected_repos:
repo_identifier = get_repo_identifier(repo, all_repos)
repo_dir = get_repo_dir(repositories_base_dir, repo)
if not os.path.exists(repo_dir):
print(f"Repository directory '{repo_dir}' not found for {repo_identifier}.")
continue
verified_info = repo.get("verified")
verified_ok, errors, commit_hash, signing_key = verify_repository(
repo,
repo_dir,
mode="pull",
no_verification=no_verification,
)
# Only prompt the user if:
# - we are NOT in preview mode
# - verification is enabled
# - the repo has verification info configured
# - verification failed
if (
not preview
and not no_verification
and verified_info
and not verified_ok
):
print(f"Warning: Verification failed for {repo_identifier}:")
for err in errors:
print(f" - {err}")
choice = input("Proceed with 'git pull'? (y/N): ").strip().lower()
if choice != "y":
continue
# Build the git pull command (include extra args if present)
args_part = " ".join(extra_args) if extra_args else ""
full_cmd = f"git pull{(' ' + args_part) if args_part else ''}"
if preview:
# Preview mode: only show the command, do not execute or prompt.
print(f"[Preview] In '{repo_dir}': {full_cmd}")
else:
print(f"Running in '{repo_dir}': {full_cmd}")
result = subprocess.run(full_cmd, cwd=repo_dir, shell=True)
if result.returncode != 0:
print(
f"'git pull' for {repo_identifier} failed "
f"with exit code {result.returncode}."
)
sys.exit(result.returncode)
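Preview usage sketch; with preview=True no prompts are shown and no git commands run, only the would-be command (or a 'not found' notice when the clone is missing) is printed. Repository values are illustrative:

from pkgmgr.actions.repository.pull import pull_with_verification

repo = {"provider": "github.com", "account": "kevinveenbirkenbach",
        "repository": "package-manager"}
pull_with_verification(
    selected_repos=[repo],
    repositories_base_dir="~/Repositories",
    all_repos=[repo],
    extra_args=["--rebase"],
    no_verification=True,
    preview=True,
)
# [Preview] In '<repo_dir>': git pull --rebase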

View File

@@ -1,9 +1,9 @@
import sys
import shutil
from .exec_proxy_command import exec_proxy_command
from .run_command import run_command
from .get_repo_identifier import get_repo_identifier
from pkgmgr.actions.proxy import exec_proxy_command
from pkgmgr.core.command.run import run_command
from pkgmgr.core.repository.identifier import get_repo_identifier
def status_repos(

View File

@@ -1,8 +1,8 @@
import sys
import shutil
from pkgmgr.pull_with_verification import pull_with_verification
from pkgmgr.install_repos import install_repos
from pkgmgr.actions.repository.pull import pull_with_verification
from pkgmgr.actions.install import install_repos
def update_repos(
@@ -54,7 +54,7 @@ def update_repos(
)
if system_update:
from pkgmgr.run_command import run_command
from pkgmgr.core.command.run import run_command
# Nix: upgrade all profile entries (if Nix is available)
if shutil.which("nix") is not None:

pkgmgr/cli.py → pkgmgr/cli/__init__.py Executable file → Normal file
View File

@@ -1,17 +1,20 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import os
import sys
from pkgmgr.load_config import load_config
from pkgmgr.cli_core import CLIContext, create_parser, dispatch_command
from pkgmgr.core.config.load import load_config
# Define configuration file paths.
PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
USER_CONFIG_PATH = os.path.join(PROJECT_ROOT, "config", "config.yaml")
from .context import CLIContext
from .parser import create_parser
from .dispatch import dispatch_command
__all__ = ["CLIContext", "create_parser", "dispatch_command", "main"]
# User config lives in the home directory:
# ~/.config/pkgmgr/config.yaml
USER_CONFIG_PATH = os.path.expanduser("~/.config/pkgmgr/config.yaml")
DESCRIPTION_TEXT = """\
\033[1;32mPackage Manager 🤖📦\033[0m
@@ -31,7 +34,6 @@ dependency formats, including:
\033[1;33mNix:\033[0m flake.nix
\033[1;33mArch Linux:\033[0m PKGBUILD
\033[1;33mAnsible:\033[0m requirements.yml
\033[1;33mpkgmgr-native:\033[0m pkgmgr.yml
This allows pkgmgr to perform installation, updates, verification, dependency
resolution, and synchronization across complex multi-repo environments with a
@@ -63,20 +65,31 @@ For detailed help on each command, use:
def main() -> None:
# Load merged configuration
"""
Entry point for the pkgmgr CLI.
"""
config_merged = load_config(USER_CONFIG_PATH)
repositories_base_dir = os.path.expanduser(
config_merged["directories"]["repositories"]
# Directories: be robust and provide sane defaults if missing
directories = config_merged.get("directories") or {}
repositories_dir = os.path.expanduser(
directories.get("repositories", "~/Repositories")
)
binaries_dir = os.path.expanduser(
config_merged["directories"]["binaries"]
directories.get("binaries", "~/.local/bin")
)
all_repositories = config_merged["repositories"]
# Ensure the merged config actually contains the resolved directories
config_merged.setdefault("directories", {})
config_merged["directories"]["repositories"] = repositories_dir
config_merged["directories"]["binaries"] = binaries_dir
all_repositories = config_merged.get("repositories", [])
ctx = CLIContext(
config_merged=config_merged,
repositories_base_dir=repositories_base_dir,
repositories_base_dir=repositories_dir,
all_repositories=all_repositories,
binaries_dir=binaries_dir,
user_config_path=USER_CONFIG_PATH,
@@ -85,7 +98,6 @@ def main() -> None:
parser = create_parser(DESCRIPTION_TEXT)
args = parser.parse_args()
# If no subcommand is provided, show help
if not getattr(args, "command", None):
parser.print_help()
return
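
The directory fallback in main() can be sketched in isolation; the merged config dict here is hypothetical and deliberately omits the "directories" section:

    import os

    config_merged = {"repositories": []}  # hypothetical config without directories

    directories = config_merged.get("directories") or {}
    repositories_dir = os.path.expanduser(directories.get("repositories", "~/Repositories"))
    binaries_dir = os.path.expanduser(directories.get("binaries", "~/.local/bin"))

    # Write the resolved paths back so downstream code sees concrete values.
    config_merged.setdefault("directories", {})
    config_merged["directories"]["repositories"] = repositories_dir
    config_merged["directories"]["binaries"] = binaries_dir
    # repositories_dir -> e.g. /home/<user>/Repositories
    # binaries_dir     -> e.g. /home/<user>/.local/bin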

View File

@@ -2,8 +2,8 @@ from __future__ import annotations
import sys
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.branch_commands import open_branch
from pkgmgr.cli.context import CLIContext
from pkgmgr.actions.branch import open_branch, close_branch
def handle_branch(args, ctx: CLIContext) -> None:
@@ -11,7 +11,8 @@ def handle_branch(args, ctx: CLIContext) -> None:
Handle `pkgmgr branch` subcommands.
Currently supported:
- pkgmgr branch open [<name>] [--base <branch>]
- pkgmgr branch open [<name>] [--base <branch>]
- pkgmgr branch close [<name>] [--base <branch>]
"""
if args.subcommand == "open":
open_branch(
@@ -21,5 +22,13 @@ def handle_branch(args, ctx: CLIContext) -> None:
)
return
if args.subcommand == "close":
close_branch(
name=getattr(args, "name", None),
base_branch=getattr(args, "base", "main"),
cwd=".",
)
return
print(f"Unknown branch subcommand: {args.subcommand}")
sys.exit(2)

View File

@@ -4,12 +4,12 @@ import os
import sys
from typing import Any, Dict, List, Optional, Tuple
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.git_utils import get_tags
from pkgmgr.versioning import SemVer, extract_semver_from_tags
from pkgmgr.changelog import generate_changelog
from pkgmgr.cli.context import CLIContext
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.git import get_tags
from pkgmgr.core.version.semver import SemVer, extract_semver_from_tags
from pkgmgr.actions.changelog import generate_changelog
Repository = Dict[str, Any]

View File

@@ -0,0 +1,240 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import os
import sys
import shutil
from pathlib import Path
from typing import Any, Dict
import yaml
from pkgmgr.cli.context import CLIContext
from pkgmgr.actions.config.init import config_init
from pkgmgr.actions.config.add import interactive_add
from pkgmgr.core.repository.resolve import resolve_repos
from pkgmgr.core.config.save import save_user_config
from pkgmgr.actions.config.show import show_config
from pkgmgr.core.command.run import run_command
def _load_user_config(user_config_path: str) -> Dict[str, Any]:
"""
Load the user config from ~/.config/pkgmgr/config.yaml
(or whatever ctx.user_config_path is), creating the directory if needed.
"""
user_config_path_expanded = os.path.expanduser(user_config_path)
cfg_dir = os.path.dirname(user_config_path_expanded)
if cfg_dir and not os.path.isdir(cfg_dir):
os.makedirs(cfg_dir, exist_ok=True)
if os.path.exists(user_config_path_expanded):
with open(user_config_path_expanded, "r", encoding="utf-8") as f:
return yaml.safe_load(f) or {"repositories": []}
return {"repositories": []}
def _find_defaults_source_dir() -> str | None:
"""
Find the directory inside the installed pkgmgr package OR the
project root that contains default config files.
Preferred locations (in this order):
- <pkg_root>/config_defaults
- <pkg_root>/config
- <project_root>/config_defaults
- <project_root>/config
"""
import pkgmgr # local import to avoid circular deps
pkg_root = Path(pkgmgr.__file__).resolve().parent
project_root = pkg_root.parent
candidates = [
pkg_root / "config_defaults",
pkg_root / "config",
project_root / "config_defaults",
project_root / "config",
]
for cand in candidates:
if cand.is_dir():
return str(cand)
return None
def _update_default_configs(user_config_path: str) -> None:
"""
Copy all default *.yml/*.yaml files from the installed pkgmgr package
into ~/.config/pkgmgr/, overwriting existing ones except the user
config file itself (config.yaml), which is never touched.
"""
source_dir = _find_defaults_source_dir()
if not source_dir:
print(
"[WARN] No config_defaults or config directory found in "
"pkgmgr installation. Nothing to update."
)
return
dest_dir = os.path.dirname(os.path.expanduser(user_config_path))
if not dest_dir:
dest_dir = os.path.expanduser("~/.config/pkgmgr")
os.makedirs(dest_dir, exist_ok=True)
for name in os.listdir(source_dir):
lower = name.lower()
if not (lower.endswith(".yml") or lower.endswith(".yaml")):
continue
if name == "config.yaml":
# Never overwrite the user config template / live config
continue
src = os.path.join(source_dir, name)
dst = os.path.join(dest_dir, name)
shutil.copy2(src, dst)
print(f"[INFO] Updated default config file: {dst}")
def handle_config(args, ctx: CLIContext) -> None:
"""
Handle 'pkgmgr config' subcommands.
"""
user_config_path = ctx.user_config_path
# ------------------------------------------------------------
# config show
# ------------------------------------------------------------
if args.subcommand == "show":
if args.all or (not args.identifiers):
# Full merged config view
show_config([], user_config_path, full_config=True)
else:
# Show only matching entries from user config
user_config = _load_user_config(user_config_path)
selected = resolve_repos(
args.identifiers,
user_config.get("repositories", []),
)
if selected:
show_config(
selected,
user_config_path,
full_config=False,
)
return
# ------------------------------------------------------------
# config add
# ------------------------------------------------------------
if args.subcommand == "add":
interactive_add(ctx.config_merged, user_config_path)
return
# ------------------------------------------------------------
# config edit
# ------------------------------------------------------------
if args.subcommand == "edit":
run_command(f"nano {user_config_path}")
return
# ------------------------------------------------------------
# config init
# ------------------------------------------------------------
if args.subcommand == "init":
user_config = _load_user_config(user_config_path)
config_init(
user_config,
ctx.config_merged,
ctx.binaries_dir,
user_config_path,
)
return
# ------------------------------------------------------------
# config delete
# ------------------------------------------------------------
if args.subcommand == "delete":
user_config = _load_user_config(user_config_path)
if args.all or not args.identifiers:
print(
"[ERROR] 'config delete' requires explicit identifiers. "
"Use 'config show' to inspect entries."
)
return
to_delete = resolve_repos(
args.identifiers,
user_config.get("repositories", []),
)
new_repos = [
entry
for entry in user_config.get("repositories", [])
if entry not in to_delete
]
user_config["repositories"] = new_repos
save_user_config(user_config, user_config_path)
print(f"Deleted {len(to_delete)} entries from user config.")
return
# ------------------------------------------------------------
# config ignore
# ------------------------------------------------------------
if args.subcommand == "ignore":
user_config = _load_user_config(user_config_path)
if args.all or not args.identifiers:
print(
"[ERROR] 'config ignore' requires explicit identifiers. "
"Use 'config show' to inspect entries."
)
return
to_modify = resolve_repos(
args.identifiers,
user_config.get("repositories", []),
)
for entry in user_config["repositories"]:
key = (
entry.get("provider"),
entry.get("account"),
entry.get("repository"),
)
for mod in to_modify:
mod_key = (
mod.get("provider"),
mod.get("account"),
mod.get("repository"),
)
if key == mod_key:
entry["ignore"] = args.set == "true"
print(
f"Set ignore for {key} to {entry['ignore']}"
)
save_user_config(user_config, user_config_path)
return
# ------------------------------------------------------------
# config update
# ------------------------------------------------------------
if args.subcommand == "update":
"""
Copy default YAML configs from the installed package into the
user's ~/.config/pkgmgr directory.
This will overwrite files with the same name (except config.yaml).
"""
_update_default_configs(user_config_path)
return
# ------------------------------------------------------------
# Unknown subcommand
# ------------------------------------------------------------
print(f"Unknown config subcommand: {args.subcommand}")
sys.exit(2)
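
A rough sketch of the candidate lookup performed by _find_defaults_source_dir(); the package root path is hypothetical, in the real code it is derived from pkgmgr.__file__:

    from pathlib import Path

    pkg_root = Path("/usr/lib/python3.12/site-packages/pkgmgr")  # hypothetical
    project_root = pkg_root.parent

    candidates = [
        pkg_root / "config_defaults",
        pkg_root / "config",
        project_root / "config_defaults",
        project_root / "config",
    ]
    # The first existing directory wins. 'pkgmgr config update' then copies
    # its *.yml / *.yaml files into ~/.config/pkgmgr/, never touching config.yaml.
    source_dir = next((str(c) for c in candidates if c.is_dir()), None)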

View File

@@ -3,8 +3,8 @@ from __future__ import annotations
import sys
from typing import Any, Dict, List
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.exec_proxy_command import exec_proxy_command
from pkgmgr.cli.context import CLIContext
from pkgmgr.actions.proxy import exec_proxy_command
Repository = Dict[str, Any]

View File

@@ -0,0 +1,98 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Release command wiring for the pkgmgr CLI.
This module implements the `pkgmgr release` subcommand on top of the
generic selection logic from cli.dispatch. It does not define its
own subparser; the CLI surface is configured in cli.parser.
Responsibilities:
- Take the parsed argparse.Namespace for the `release` command.
- Use the list of selected repositories provided by dispatch_command().
- Optionally list affected repositories when --list is set.
- For each selected repository, run pkgmgr.actions.release.release(...) in
the context of that repository directory.
"""
from __future__ import annotations
import os
from typing import Any, Dict, List
from pkgmgr.cli.context import CLIContext
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.actions.release import release as run_release
Repository = Dict[str, Any]
def handle_release(
args,
ctx: CLIContext,
selected: List[Repository],
) -> None:
"""
Handle the `pkgmgr release` subcommand.
Flow:
1) Use the `selected` repositories as computed by dispatch_command().
2) If --list is given, print the identifiers of the selected repos
and return without running any release.
3) For each selected repository:
- Resolve its identifier and local directory.
- Change into that directory.
- Call pkgmgr.actions.release.release(...) with the parsed options.
"""
if not selected:
print("[pkgmgr] No repositories selected for release.")
return
# List-only mode: show which repositories would be affected.
if getattr(args, "list", False):
print("[pkgmgr] Repositories that would be affected by this release:")
for repo in selected:
identifier = get_repo_identifier(repo, ctx.all_repositories)
print(f" - {identifier}")
return
for repo in selected:
identifier = get_repo_identifier(repo, ctx.all_repositories)
repo_dir = repo.get("directory")
if not repo_dir:
try:
repo_dir = get_repo_dir(ctx.repositories_base_dir, repo)
except Exception:
repo_dir = None
if not repo_dir or not os.path.isdir(repo_dir):
print(
f"[WARN] Skipping repository {identifier}: "
"local directory does not exist."
)
continue
print(
f"[pkgmgr] Running release for repository {identifier} "
f"in '{repo_dir}'..."
)
# Change to repo directory and invoke the helper.
cwd_before = os.getcwd()
try:
os.chdir(repo_dir)
run_release(
pyproject_path="pyproject.toml",
changelog_path="CHANGELOG.md",
release_type=args.release_type,
message=args.message or None,
preview=getattr(args, "preview", False),
force=getattr(args, "force", False),
close=getattr(args, "close", False),
)
finally:
os.chdir(cwd_before)
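
A small sketch of the list-only path: with a hypothetical parsed namespace like the one below and a non-empty selection, handle_release() only prints the affected identifiers and never calls run_release():

    from argparse import Namespace

    # Hypothetical arguments for something like 'pkgmgr release patch --list';
    # the accepted release types are defined in cli.parser.
    args = Namespace(
        command="release",
        release_type="patch",
        message=None,
        list=True,       # list-only mode
        preview=False,
        force=False,
        close=False,
    )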

View File

@@ -0,0 +1,214 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import sys
from typing import Any, Dict, List
from pkgmgr.cli.context import CLIContext
from pkgmgr.actions.install import install_repos
from pkgmgr.actions.repository.deinstall import deinstall_repos
from pkgmgr.actions.repository.delete import delete_repos
from pkgmgr.actions.repository.update import update_repos
from pkgmgr.actions.repository.status import status_repos
from pkgmgr.actions.repository.list import list_repositories
from pkgmgr.core.command.run import run_command
from pkgmgr.actions.repository.create import create_repo
from pkgmgr.core.repository.selected import get_selected_repos
from pkgmgr.core.repository.dir import get_repo_dir
Repository = Dict[str, Any]
def _resolve_repository_directory(repository: Repository, ctx: CLIContext) -> str:
"""
Resolve the local filesystem directory for a repository.
Priority:
1. Use repository["directory"] if present.
2. Fallback to get_repo_dir(...) using the repositories base directory
from the CLI context.
"""
repo_dir = repository.get("directory")
if repo_dir:
return repo_dir
base_dir = (
getattr(ctx, "repositories_base_dir", None)
or getattr(ctx, "repositories_dir", None)
)
if not base_dir:
raise RuntimeError(
"Cannot resolve repositories base directory from context; "
"expected ctx.repositories_base_dir or ctx.repositories_dir."
)
return get_repo_dir(base_dir, repository)
def handle_repos_command(
args,
ctx: CLIContext,
selected: List[Repository],
) -> None:
"""
Handle core repository commands (install/update/deinstall/delete/.../list).
"""
# ------------------------------------------------------------
# install
# ------------------------------------------------------------
if args.command == "install":
install_repos(
selected,
ctx.repositories_base_dir,
ctx.binaries_dir,
ctx.all_repositories,
args.no_verification,
args.preview,
args.quiet,
args.clone_mode,
args.dependencies,
)
return
# ------------------------------------------------------------
# update
# ------------------------------------------------------------
if args.command == "update":
update_repos(
selected,
ctx.repositories_base_dir,
ctx.binaries_dir,
ctx.all_repositories,
args.no_verification,
args.system,
args.preview,
args.quiet,
args.dependencies,
args.clone_mode,
)
return
# ------------------------------------------------------------
# deinstall
# ------------------------------------------------------------
if args.command == "deinstall":
deinstall_repos(
selected,
ctx.repositories_base_dir,
ctx.binaries_dir,
ctx.all_repositories,
preview=args.preview,
)
return
# ------------------------------------------------------------
# delete
# ------------------------------------------------------------
if args.command == "delete":
delete_repos(
selected,
ctx.repositories_base_dir,
ctx.all_repositories,
preview=args.preview,
)
return
# ------------------------------------------------------------
# status
# ------------------------------------------------------------
if args.command == "status":
status_repos(
selected,
ctx.repositories_base_dir,
ctx.all_repositories,
args.extra_args,
list_only=args.list,
system_status=args.system,
preview=args.preview,
)
return
# ------------------------------------------------------------
# path
# ------------------------------------------------------------
if args.command == "path":
if not selected:
print("[pkgmgr] No repositories selected for path.")
return
for repository in selected:
try:
repo_dir = _resolve_repository_directory(repository, ctx)
except Exception as exc:
ident = (
f"{repository.get('provider', '?')}/"
f"{repository.get('account', '?')}/"
f"{repository.get('repository', '?')}"
)
print(
f"[WARN] Could not resolve directory for {ident}: {exc}"
)
continue
print(repo_dir)
return
# ------------------------------------------------------------
# shell
# ------------------------------------------------------------
if args.command == "shell":
if not args.shell_command:
print("[ERROR] 'shell' requires a command via -c/--command.")
sys.exit(2)
command_to_run = " ".join(args.shell_command)
for repository in selected:
repo_dir = _resolve_repository_directory(repository, ctx)
print(f"Executing in '{repo_dir}': {command_to_run}")
run_command(
command_to_run,
cwd=repo_dir,
preview=args.preview,
)
return
# ------------------------------------------------------------
# create
# ------------------------------------------------------------
if args.command == "create":
if not args.identifiers:
print(
"[ERROR] 'create' requires at least one identifier "
"in the format provider/account/repository."
)
sys.exit(1)
for identifier in args.identifiers:
create_repo(
identifier,
ctx.config_merged,
ctx.user_config_path,
ctx.binaries_dir,
remote=args.remote,
preview=args.preview,
)
return
# ------------------------------------------------------------
# list
# ------------------------------------------------------------
if args.command == "list":
list_repositories(
selected,
ctx.repositories_base_dir,
ctx.binaries_dir,
status_filter=getattr(args, "status", "") or "",
extra_tags=getattr(args, "tag", []) or [],
show_description=getattr(args, "description", False),
)
return
print(f"[ERROR] Unknown repos command: {args.command}")
sys.exit(2)
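
Two hypothetical repository entries illustrating the resolution priority of _resolve_repository_directory():

    explicit = {
        "provider": "github.com", "account": "alice", "repository": "tool",
        "directory": "/opt/checkouts/tool",   # 1) explicit 'directory' key wins
    }
    implicit = {
        "provider": "github.com", "account": "alice", "repository": "tool",
        # 2) no 'directory' key -> falls back to
        #    get_repo_dir(ctx.repositories_base_dir, implicit),
        #    e.g. ~/Repositories/github.com/alice/tool
    }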

View File

@@ -0,0 +1,115 @@
from __future__ import annotations
import json
import os
from typing import Any, Dict, List
from pkgmgr.cli.context import CLIContext
from pkgmgr.core.command.run import run_command
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
Repository = Dict[str, Any]
def _resolve_repository_path(repository: Repository, ctx: CLIContext) -> str:
"""
Resolve the filesystem path for a repository.
Priority:
1. Use explicit keys if present (directory / path / workspace / workspace_dir).
2. Fallback to get_repo_dir(...) using the repositories base directory
from the CLI context.
"""
# 1) Explicit path-like keys on the repository object
for key in ("directory", "path", "workspace", "workspace_dir"):
value = repository.get(key)
if value:
return value
# 2) Fallback: compute from base dir + repository metadata
base_dir = (
getattr(ctx, "repositories_base_dir", None)
or getattr(ctx, "repositories_dir", None)
)
if not base_dir:
raise RuntimeError(
"Cannot resolve repositories base directory from context; "
"expected ctx.repositories_base_dir or ctx.repositories_dir."
)
return get_repo_dir(base_dir, repository)
def handle_tools_command(
args,
ctx: CLIContext,
selected: List[Repository],
) -> None:
# ------------------------------------------------------------------
# nautilus "explore" command
# ------------------------------------------------------------------
if args.command == "explore":
for repository in selected:
repo_path = _resolve_repository_path(repository, ctx)
run_command(
f'nautilus "{repo_path}" & disown'
)
return
# ------------------------------------------------------------------
# GNOME terminal command
# ------------------------------------------------------------------
if args.command == "terminal":
for repository in selected:
repo_path = _resolve_repository_path(repository, ctx)
run_command(
f'gnome-terminal --tab --working-directory="{repo_path}"'
)
return
# ------------------------------------------------------------------
# VS Code workspace command
# ------------------------------------------------------------------
if args.command == "code":
if not selected:
print("No repositories selected.")
return
identifiers = [
get_repo_identifier(repo, ctx.all_repositories)
for repo in selected
]
sorted_identifiers = sorted(identifiers)
workspace_name = "_".join(sorted_identifiers) + ".code-workspace"
directories_cfg = ctx.config_merged.get("directories") or {}
workspaces_dir = os.path.expanduser(
directories_cfg.get("workspaces", "~/Workspaces")
)
os.makedirs(workspaces_dir, exist_ok=True)
workspace_file = os.path.join(workspaces_dir, workspace_name)
folders = [
{"path": _resolve_repository_path(repository, ctx)}
for repository in selected
]
workspace_data = {
"folders": folders,
"settings": {},
}
if not os.path.exists(workspace_file):
with open(workspace_file, "w", encoding="utf-8") as f:
json.dump(workspace_data, f, indent=4)
print(f"Created workspace file: {workspace_file}")
else:
print(f"Using existing workspace file: {workspace_file}")
run_command(f'code "{workspace_file}"')
return
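
A short sketch of how the workspace file name and payload are assembled for the `code` command; the identifiers and folder paths are hypothetical:

    import json

    identifiers = ["pkgmgr", "arc"]   # hypothetical get_repo_identifier() results
    workspace_name = "_".join(sorted(identifiers)) + ".code-workspace"
    # -> "arc_pkgmgr.code-workspace"

    workspace_data = {
        "folders": [
            {"path": "~/Repositories/github.com/kevinveenbirkenbach/arc"},
            {"path": "~/Repositories/github.com/kevinveenbirkenbach/pkgmgr"},
        ],
        "settings": {},
    }
    print(json.dumps(workspace_data, indent=4))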

View File

@@ -4,12 +4,12 @@ import os
import sys
from typing import Any, Dict, List, Optional, Tuple
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.git_utils import get_tags
from pkgmgr.versioning import SemVer, find_latest_version
from pkgmgr.version_sources import (
from pkgmgr.cli.context import CLIContext
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.git import get_tags
from pkgmgr.core.version.semver import SemVer, find_latest_version
from pkgmgr.core.version.source import (
read_pyproject_version,
read_flake_version,
read_pkgbuild_version,

178
pkgmgr/cli/dispatch.py Normal file
View File

@@ -0,0 +1,178 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import os
import sys
from typing import List, Dict, Any
from pkgmgr.cli.context import CLIContext
from pkgmgr.cli.proxy import maybe_handle_proxy
from pkgmgr.core.repository.selected import get_selected_repos
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.cli.commands import (
handle_repos_command,
handle_tools_command,
handle_release,
handle_version,
handle_config,
handle_make,
handle_changelog,
handle_branch,
)
def _has_explicit_selection(args) -> bool:
"""
Return True if the user explicitly selected repositories via
identifiers / --all / --category / --tag / --string.
"""
identifiers = getattr(args, "identifiers", []) or []
use_all = getattr(args, "all", False)
categories = getattr(args, "category", []) or []
tags = getattr(args, "tag", []) or []
string_filter = getattr(args, "string", "") or ""
return bool(
use_all
or identifiers
or categories
or tags
or string_filter
)
def _select_repo_for_current_directory(
ctx: CLIContext,
) -> List[Dict[str, Any]]:
"""
Heuristic: find the repository whose local directory matches the
current working directory or is the closest parent.
Example:
- Repo directory: /home/kevin/Repositories/foo
- CWD: /home/kevin/Repositories/foo/subdir
'foo' is selected.
"""
cwd = os.path.abspath(os.getcwd())
candidates: List[tuple[str, Dict[str, Any]]] = []
for repo in ctx.all_repositories:
repo_dir = repo.get("directory")
if not repo_dir:
try:
repo_dir = get_repo_dir(ctx.repositories_base_dir, repo)
except Exception:
repo_dir = None
if not repo_dir:
continue
repo_dir_abs = os.path.abspath(os.path.expanduser(repo_dir))
if cwd == repo_dir_abs or cwd.startswith(repo_dir_abs + os.sep):
candidates.append((repo_dir_abs, repo))
if not candidates:
return []
# Pick the repo with the longest (most specific) path.
candidates.sort(key=lambda item: len(item[0]), reverse=True)
return [candidates[0][1]]
def dispatch_command(args, ctx: CLIContext) -> None:
"""
Dispatch the parsed arguments to the appropriate command handler.
"""
# First: proxy commands (git / docker / docker compose / make wrapper etc.)
if maybe_handle_proxy(args, ctx):
return
# Commands that operate on repository selections
commands_with_selection: List[str] = [
"install",
"update",
"deinstall",
"delete",
"status",
"path",
"shell",
"create",
"list",
"make",
"release",
"version",
"changelog",
"explore",
"terminal",
"code",
]
if getattr(args, "command", None) in commands_with_selection:
if _has_explicit_selection(args):
# Classic selection logic (identifiers / --all / filters)
selected = get_selected_repos(args, ctx.all_repositories)
else:
# Default per help text: repository of current folder.
selected = _select_repo_for_current_directory(ctx)
# If none is found, leave 'selected' empty.
# Individual handlers will then emit a clear message instead
# of silently picking an unrelated repository.
else:
selected = []
# ------------------------------------------------------------------ #
# Repos-related commands
# ------------------------------------------------------------------ #
if args.command in (
"install",
"update",
"deinstall",
"delete",
"status",
"path",
"shell",
"create",
"list",
):
handle_repos_command(args, ctx, selected)
return
# ------------------------------------------------------------------ #
# Tools (explore / terminal / code)
# ------------------------------------------------------------------ #
if args.command in ("explore", "terminal", "code"):
handle_tools_command(args, ctx, selected)
return
# ------------------------------------------------------------------ #
# Release / Version / Changelog / Config / Make / Branch
# ------------------------------------------------------------------ #
if args.command == "release":
handle_release(args, ctx, selected)
return
if args.command == "version":
handle_version(args, ctx, selected)
return
if args.command == "changelog":
handle_changelog(args, ctx, selected)
return
if args.command == "config":
handle_config(args, ctx)
return
if args.command == "make":
handle_make(args, ctx, selected)
return
if args.command == "branch":
handle_branch(args, ctx)
return
print(f"Unknown command: {args.command}")
sys.exit(2)
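
The current-directory heuristic reduces to a longest-matching-prefix pick; a minimal sketch with hypothetical paths:

    import os

    cwd = "/home/kevin/Repositories/foo/subdir"        # hypothetical working directory
    candidates = [
        ("/home/kevin/Repositories", {"repository": "meta"}),
        ("/home/kevin/Repositories/foo", {"repository": "foo"}),
    ]
    matching = [c for c in candidates if cwd == c[0] or cwd.startswith(c[0] + os.sep)]
    matching.sort(key=lambda item: len(item[0]), reverse=True)
    selected = [matching[0][1]] if matching else []
    # -> [{'repository': 'foo'}]  (the most specific parent directory wins)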

View File

@@ -1,8 +1,11 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
from pkgmgr.cli_core.proxy import register_proxy_commands
from pkgmgr.cli.proxy import register_proxy_commands
class SortedSubParsersAction(argparse._SubParsersAction):
@@ -12,13 +15,20 @@ class SortedSubParsersAction(argparse._SubParsersAction):
def add_parser(self, name, **kwargs):
parser = super().add_parser(name, **kwargs)
# Sort choices alphabetically by dest (subcommand name)
self._choices_actions.sort(key=lambda a: a.dest)
return parser
def add_identifier_arguments(subparser: argparse.ArgumentParser) -> None:
"""
Attach generic repository selection arguments to a subparser.
Common identifier / selection arguments for many subcommands.
Selection modes (mutually exclusive in intent, not hard-enforced):
- identifiers (positional): select by alias / provider/account/repo
- --all: select all repositories
- --category / --string / --tag: filter-based selection on top
of the full repository set
"""
subparser.add_argument(
"identifiers",
@@ -39,6 +49,33 @@ def add_identifier_arguments(subparser: argparse.ArgumentParser) -> None:
"yes | pkgmgr {subcommand} --all"
),
)
subparser.add_argument(
"--category",
nargs="+",
default=[],
help=(
"Filter repositories by category patterns derived from config "
"filenames or repo metadata (use filename without .yml/.yaml, "
"or /regex/ to use a regular expression)."
),
)
subparser.add_argument(
"--string",
default="",
help=(
"Filter repositories whose identifier / name / path contains this "
"substring (case-insensitive). Use /regex/ for regular expressions."
),
)
subparser.add_argument(
"--tag",
action="append",
default=[],
help=(
"Filter repositories by tag. Matches tags from the repository "
"collector and category tags. Use /regex/ for regular expressions."
),
)
subparser.add_argument(
"--preview",
action="store_true",
@@ -61,7 +98,7 @@ def add_identifier_arguments(subparser: argparse.ArgumentParser) -> None:
def add_install_update_arguments(subparser: argparse.ArgumentParser) -> None:
"""
Attach shared flags for install/update-like commands.
Common arguments for install/update commands.
"""
add_identifier_arguments(subparser)
subparser.add_argument(
@@ -94,10 +131,7 @@ def add_install_update_arguments(subparser: argparse.ArgumentParser) -> None:
def create_parser(description_text: str) -> argparse.ArgumentParser:
"""
Create and configure the top-level argument parser for pkgmgr.
This function defines *only* the CLI surface (arguments & subcommands),
but no business logic.
Create the top-level argument parser for pkgmgr.
"""
parser = argparse.ArgumentParser(
description=description_text,
@@ -110,7 +144,7 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
)
# ------------------------------------------------------------
# install / update
# install / update / deinstall / delete
# ------------------------------------------------------------
install_parser = subparsers.add_parser(
"install",
@@ -129,9 +163,6 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
help="Include system update commands",
)
# ------------------------------------------------------------
# deinstall / delete
# ------------------------------------------------------------
deinstall_parser = subparsers.add_parser(
"deinstall",
help="Remove alias links to repository/repositories",
@@ -147,7 +178,7 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
# ------------------------------------------------------------
# create
# ------------------------------------------------------------
create_parser = subparsers.add_parser(
create_cmd_parser = subparsers.add_parser(
"create",
help=(
"Create new repository entries: add them to the config if not "
@@ -155,8 +186,8 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
"remotely if --remote is set."
),
)
add_identifier_arguments(create_parser)
create_parser.add_argument(
add_identifier_arguments(create_cmd_parser)
create_cmd_parser.add_argument(
"--remote",
action="store_true",
help="If set, add the remote and push the initial commit.",
@@ -228,6 +259,14 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
help="Set ignore to true or false",
)
config_subparsers.add_parser(
"update",
help=(
"Update default config files in ~/.config/pkgmgr/ from the "
"installed pkgmgr package (does not touch config.yaml)."
),
)
# ------------------------------------------------------------
# path / explore / terminal / code / shell
# ------------------------------------------------------------
@@ -265,7 +304,10 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
"--command",
nargs=argparse.REMAINDER,
dest="shell_command",
help="The shell command (and its arguments) to execute in each repository",
help=(
"The shell command (and its arguments) to execute in each "
"repository"
),
default=[],
)
@@ -274,7 +316,7 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
# ------------------------------------------------------------
branch_parser = subparsers.add_parser(
"branch",
help="Branch-related utilities (e.g. open feature branches)",
help="Branch-related utilities (e.g. open/close feature branches)",
)
branch_subparsers = branch_parser.add_subparsers(
dest="subcommand",
@@ -289,7 +331,10 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
branch_open.add_argument(
"name",
nargs="?",
help="Name of the new branch (optional; will be asked interactively if omitted)",
help=(
"Name of the new branch (optional; will be asked interactively "
"if omitted)"
),
)
branch_open.add_argument(
"--base",
@@ -297,6 +342,26 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
help="Base branch to create the new branch from (default: main)",
)
branch_close = branch_subparsers.add_parser(
"close",
help="Merge a feature branch into base and delete it",
)
branch_close.add_argument(
"name",
nargs="?",
help=(
"Name of the branch to close (optional; current branch is used "
"if omitted)"
),
)
branch_close.add_argument(
"--base",
default="main",
help=(
"Base branch to merge into (default: main; falls back to master "
"internally if main does not exist)"
),
)
# ------------------------------------------------------------
# release
@@ -316,12 +381,32 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
release_parser.add_argument(
"-m",
"--message",
default="",
default=None,
help=(
"Optional release message to add to the changelog and tag."
),
)
# Generic selection / preview / list / extra_args
add_identifier_arguments(release_parser)
# Close current branch after successful release
release_parser.add_argument(
"--close",
action="store_true",
help=(
"Close the current branch after a successful release in each "
"repository, if it is not main/master."
),
)
# Force: skip preview+confirmation and run release directly
release_parser.add_argument(
"-f",
"--force",
action="store_true",
help=(
"Skip the interactive preview+confirmation step and run the "
"release directly."
),
)
# ------------------------------------------------------------
# version
@@ -330,7 +415,8 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
"version",
help=(
"Show version information for repository/ies "
"(git tags, pyproject.toml, flake.nix, PKGBUILD, debian, spec, Ansible Galaxy)."
"(git tags, pyproject.toml, flake.nix, PKGBUILD, debian, spec, "
"Ansible Galaxy)."
),
)
add_identifier_arguments(version_parser)
@@ -364,20 +450,29 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
"list",
help="List all repositories with details and status",
)
list_parser.add_argument(
"--search",
default="",
help="Filter repositories that contain the given string",
)
# same selection logic as for install/update/etc.:
add_identifier_arguments(list_parser)
list_parser.add_argument(
"--status",
type=str,
default="",
help="Filter repositories by status (case insensitive)",
help=(
"Filter repositories by status (case insensitive). "
"Use /regex/ for regular expressions."
),
)
list_parser.add_argument(
"--description",
action="store_true",
help=(
"Show an additional detailed section per repository "
"(description, homepage, tags, categories, paths)."
),
)
# ------------------------------------------------------------
# make (wrapper around make in repositories)
# make
# ------------------------------------------------------------
make_parser = subparsers.add_parser(
"make",
@@ -403,7 +498,7 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
add_identifier_arguments(make_deinstall)
# ------------------------------------------------------------
# Proxy commands (git, docker, docker compose)
# Proxy commands (git, docker, docker compose, ...)
# ------------------------------------------------------------
register_proxy_commands(subparsers)
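
To show what the new selection flags yield on the parsed namespace, here is a reduced stand-in for add_identifier_arguments() (not the real pkgmgr parser, just the same argparse options):

    import argparse

    p = argparse.ArgumentParser()
    p.add_argument("identifiers", nargs="*", default=[])
    p.add_argument("--all", action="store_true")
    p.add_argument("--category", nargs="+", default=[])
    p.add_argument("--string", default="")
    p.add_argument("--tag", action="append", default=[])

    args = p.parse_args(["--category", "tools", "--tag", "cli", "--tag", "python"])
    # args.category -> ['tools'], args.tag -> ['cli', 'python'], args.string -> ''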

View File

@@ -1,14 +1,19 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
import os
import sys
from typing import Dict, List
from typing import Dict, List, Any
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.clone_repos import clone_repos
from pkgmgr.exec_proxy_command import exec_proxy_command
from pkgmgr.get_selected_repos import get_selected_repos
from pkgmgr.pull_with_verification import pull_with_verification
from pkgmgr.cli.context import CLIContext
from pkgmgr.actions.repository.clone import clone_repos
from pkgmgr.actions.proxy import exec_proxy_command
from pkgmgr.actions.repository.pull import pull_with_verification
from pkgmgr.core.repository.selected import get_selected_repos
from pkgmgr.core.repository.dir import get_repo_dir
PROXY_COMMANDS: Dict[str, List[str]] = {
@@ -42,10 +47,7 @@ PROXY_COMMANDS: Dict[str, List[str]] = {
def _add_proxy_identifier_arguments(parser: argparse.ArgumentParser) -> None:
"""
Local copy of the identifier argument set for proxy commands.
This duplicates the semantics of cli.parser.add_identifier_arguments
to avoid circular imports.
Selection arguments for proxy subcommands.
"""
parser.add_argument(
"identifiers",
@@ -66,6 +68,24 @@ def _add_proxy_identifier_arguments(parser: argparse.ArgumentParser) -> None:
"yes | pkgmgr {subcommand} --all"
),
)
parser.add_argument(
"--category",
nargs="+",
default=[],
help=(
"Filter repositories by category patterns derived from config "
"filenames or repo metadata (use filename without .yml/.yaml, "
"or /regex/ to use a regular expression)."
),
)
parser.add_argument(
"--string",
default="",
help=(
"Filter repositories whose identifier / name / path contains this "
"substring (case-insensitive). Use /regex/ for regular expressions."
),
)
parser.add_argument(
"--preview",
action="store_true",
@@ -86,12 +106,62 @@ def _add_proxy_identifier_arguments(parser: argparse.ArgumentParser) -> None:
)
def _proxy_has_explicit_selection(args: argparse.Namespace) -> bool:
"""
Same semantics as in the main dispatch:
True if the user explicitly selected repositories.
"""
identifiers = getattr(args, "identifiers", []) or []
use_all = getattr(args, "all", False)
categories = getattr(args, "category", []) or []
string_filter = getattr(args, "string", "") or ""
# Proxy commands currently do not support --tag, so it is not checked here.
return bool(
use_all
or identifiers
or categories
or string_filter
)
def _select_repo_for_current_directory(
ctx: CLIContext,
) -> List[Dict[str, Any]]:
"""
Heuristic: find the repository whose local directory matches the
current working directory or is the closest parent.
"""
cwd = os.path.abspath(os.getcwd())
candidates: List[tuple[str, Dict[str, Any]]] = []
for repo in ctx.all_repositories:
repo_dir = repo.get("directory")
if not repo_dir:
try:
repo_dir = get_repo_dir(ctx.repositories_base_dir, repo)
except Exception:
repo_dir = None
if not repo_dir:
continue
repo_dir_abs = os.path.abspath(os.path.expanduser(repo_dir))
if cwd == repo_dir_abs or cwd.startswith(repo_dir_abs + os.sep):
candidates.append((repo_dir_abs, repo))
if not candidates:
return []
# Pick the repo with the longest (most specific) path.
candidates.sort(key=lambda item: len(item[0]), reverse=True)
return [candidates[0][1]]
def register_proxy_commands(
subparsers: argparse._SubParsersAction,
) -> None:
"""
Register proxy commands (git, docker, docker compose) as
top-level subcommands on the given subparsers.
Register proxy subcommands for git, docker, docker compose, ...
"""
for command, subcommands in PROXY_COMMANDS.items():
for subcommand in subcommands:
@@ -100,7 +170,8 @@ def register_proxy_commands(
help=f"Proxies '{command} {subcommand}' to repository/ies",
description=(
f"Executes '{command} {subcommand}' for the "
"identified repos.\nTo recieve more help execute "
"selected repositories. "
"For more details see the underlying tool's help: "
f"'{command} {subcommand} --help'"
),
formatter_class=argparse.RawTextHelpFormatter,
@@ -129,8 +200,8 @@ def register_proxy_commands(
def maybe_handle_proxy(args: argparse.Namespace, ctx: CLIContext) -> bool:
"""
If the parsed command is a proxy command, execute it and return True.
Otherwise return False to let the main dispatcher continue.
If the top-level command is one of the proxy subcommands
(git / docker / docker compose), handle it here and return True.
"""
all_proxy_subcommands = {
sub for subs in PROXY_COMMANDS.values() for sub in subs
@@ -139,12 +210,17 @@ def maybe_handle_proxy(args: argparse.Namespace, ctx: CLIContext) -> bool:
if args.command not in all_proxy_subcommands:
return False
# Use generic selection semantics for proxies
selected = get_selected_repos(
getattr(args, "all", False),
ctx.all_repositories,
getattr(args, "identifiers", []),
)
# Default semantics: without explicit selection → repo of current folder.
if _proxy_has_explicit_selection(args):
selected = get_selected_repos(args, ctx.all_repositories)
else:
selected = _select_repo_for_current_directory(ctx)
if not selected:
print(
"[ERROR] No repository matches the current directory. "
"Specify identifiers or use --all/--category/--string."
)
sys.exit(1)
for command, subcommands in PROXY_COMMANDS.items():
if args.command not in subcommands:

View File

@@ -1,5 +0,0 @@
from .context import CLIContext
from .parser import create_parser
from .dispatch import dispatch_command
__all__ = ["CLIContext", "create_parser", "dispatch_command"]

View File

@@ -1,140 +0,0 @@
from __future__ import annotations
import os
import sys
from typing import Any, Dict, List
import yaml
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.config_init import config_init
from pkgmgr.interactive_add import interactive_add
from pkgmgr.resolve_repos import resolve_repos
from pkgmgr.save_user_config import save_user_config
from pkgmgr.show_config import show_config
from pkgmgr.run_command import run_command
def _load_user_config(user_config_path: str) -> Dict[str, Any]:
"""
Load the user config file, returning a default structure if it does not exist.
"""
if os.path.exists(user_config_path):
with open(user_config_path, "r") as f:
return yaml.safe_load(f) or {"repositories": []}
return {"repositories": []}
def handle_config(args, ctx: CLIContext) -> None:
"""
Handle the 'config' command and its subcommands.
"""
user_config_path = ctx.user_config_path
# --------------------------------------------------------
# config show
# --------------------------------------------------------
if args.subcommand == "show":
if args.all or (not args.identifiers):
show_config([], user_config_path, full_config=True)
else:
selected = resolve_repos(args.identifiers, ctx.all_repositories)
if selected:
show_config(
selected,
user_config_path,
full_config=False,
)
return
# --------------------------------------------------------
# config add
# --------------------------------------------------------
if args.subcommand == "add":
interactive_add(ctx.config_merged, user_config_path)
return
# --------------------------------------------------------
# config edit
# --------------------------------------------------------
if args.subcommand == "edit":
run_command(f"nano {user_config_path}")
return
# --------------------------------------------------------
# config init
# --------------------------------------------------------
if args.subcommand == "init":
user_config = _load_user_config(user_config_path)
config_init(
user_config,
ctx.config_merged,
ctx.binaries_dir,
user_config_path,
)
return
# --------------------------------------------------------
# config delete
# --------------------------------------------------------
if args.subcommand == "delete":
user_config = _load_user_config(user_config_path)
if args.all or not args.identifiers:
print("You must specify identifiers to delete.")
return
to_delete = resolve_repos(
args.identifiers,
user_config.get("repositories", []),
)
new_repos = [
entry
for entry in user_config.get("repositories", [])
if entry not in to_delete
]
user_config["repositories"] = new_repos
save_user_config(user_config, user_config_path)
print(f"Deleted {len(to_delete)} entries from user config.")
return
# --------------------------------------------------------
# config ignore
# --------------------------------------------------------
if args.subcommand == "ignore":
user_config = _load_user_config(user_config_path)
if args.all or not args.identifiers:
print("You must specify identifiers to modify ignore flag.")
return
to_modify = resolve_repos(
args.identifiers,
user_config.get("repositories", []),
)
for entry in user_config["repositories"]:
key = (
entry.get("provider"),
entry.get("account"),
entry.get("repository"),
)
for mod in to_modify:
mod_key = (
mod.get("provider"),
mod.get("account"),
mod.get("repository"),
)
if key == mod_key:
entry["ignore"] = args.set == "true"
print(
f"Set ignore for {key} to {entry['ignore']}"
)
save_user_config(user_config, user_config_path)
return
# If we end up here, something is wrong with subcommand routing
print(f"Unknown config subcommand: {args.subcommand}")
sys.exit(2)

View File

@@ -1,76 +0,0 @@
from __future__ import annotations
import os
import sys
from typing import Any, Dict, List, Optional
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr import release as rel
Repository = Dict[str, Any]
def handle_release(
args,
ctx: CLIContext,
selected: List[Repository],
) -> None:
"""
Handle the 'release' command.
Creates a release by incrementing the version and updating the changelog
in a single selected repository.
Important:
- Releases are strictly limited to exactly ONE repository.
- Using --all or specifying multiple identifiers for release does
not make sense and is therefore rejected.
- The --preview flag is respected and passed through to the release
implementation so that no changes are made in preview mode.
"""
if not selected:
print("No repositories selected for release.")
sys.exit(1)
if len(selected) > 1:
print(
"[ERROR] Release operations are limited to a single repository.\n"
"Do not use --all or multiple identifiers with 'pkgmgr release'."
)
sys.exit(1)
original_dir = os.getcwd()
repo = selected[0]
repo_dir: Optional[str] = repo.get("directory")
if not repo_dir:
repo_dir = get_repo_dir(ctx.repositories_base_dir, repo)
if not os.path.isdir(repo_dir):
print(
f"[ERROR] Repository directory does not exist locally: {repo_dir}"
)
sys.exit(1)
pyproject_path = os.path.join(repo_dir, "pyproject.toml")
changelog_path = os.path.join(repo_dir, "CHANGELOG.md")
print(
f"Releasing repository '{repo.get('repository')}' in '{repo_dir}'..."
)
os.chdir(repo_dir)
try:
rel.release(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=args.release_type,
message=args.message,
preview=getattr(args, "preview", False),
)
finally:
os.chdir(original_dir)

View File

@@ -1,161 +0,0 @@
from __future__ import annotations
import sys
from typing import Any, Dict, List
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.install_repos import install_repos
from pkgmgr.deinstall_repos import deinstall_repos
from pkgmgr.delete_repos import delete_repos
from pkgmgr.update_repos import update_repos
from pkgmgr.status_repos import status_repos
from pkgmgr.list_repositories import list_repositories
from pkgmgr.run_command import run_command
from pkgmgr.create_repo import create_repo
Repository = Dict[str, Any]
def handle_repos_command(
args,
ctx: CLIContext,
selected: List[Repository],
) -> None:
"""
Handle repository-related commands:
- install / update / deinstall / delete / status
- path / shell
- create / list
"""
# --------------------------------------------------------
# install / update
# --------------------------------------------------------
if args.command == "install":
install_repos(
selected,
ctx.repositories_base_dir,
ctx.binaries_dir,
ctx.all_repositories,
args.no_verification,
args.preview,
args.quiet,
args.clone_mode,
args.dependencies,
)
return
if args.command == "update":
update_repos(
selected,
ctx.repositories_base_dir,
ctx.binaries_dir,
ctx.all_repositories,
args.no_verification,
args.system,
args.preview,
args.quiet,
args.dependencies,
args.clone_mode,
)
return
# --------------------------------------------------------
# deinstall / delete
# --------------------------------------------------------
if args.command == "deinstall":
deinstall_repos(
selected,
ctx.repositories_base_dir,
ctx.binaries_dir,
ctx.all_repositories,
preview=args.preview,
)
return
if args.command == "delete":
delete_repos(
selected,
ctx.repositories_base_dir,
ctx.all_repositories,
preview=args.preview,
)
return
# --------------------------------------------------------
# status
# --------------------------------------------------------
if args.command == "status":
status_repos(
selected,
ctx.repositories_base_dir,
ctx.all_repositories,
args.extra_args,
list_only=args.list,
system_status=args.system,
preview=args.preview,
)
return
# --------------------------------------------------------
# path
# --------------------------------------------------------
if args.command == "path":
for repository in selected:
print(repository["directory"])
return
# --------------------------------------------------------
# shell
# --------------------------------------------------------
if args.command == "shell":
if not args.shell_command:
print("No shell command specified.")
sys.exit(2)
command_to_run = " ".join(args.shell_command)
for repository in selected:
print(
f"Executing in '{repository['directory']}': {command_to_run}"
)
run_command(
command_to_run,
cwd=repository["directory"],
preview=args.preview,
)
return
# --------------------------------------------------------
# create
# --------------------------------------------------------
if args.command == "create":
if not args.identifiers:
print(
"No identifiers provided. Please specify at least one identifier "
"in the format provider/account/repository."
)
sys.exit(1)
for identifier in args.identifiers:
create_repo(
identifier,
ctx.config_merged,
ctx.user_config_path,
ctx.binaries_dir,
remote=args.remote,
preview=args.preview,
)
return
# --------------------------------------------------------
# list
# --------------------------------------------------------
if args.command == "list":
list_repositories(
ctx.all_repositories,
ctx.repositories_base_dir,
ctx.binaries_dir,
search_filter=args.search,
status_filter=args.status,
)
return

View File

@@ -1,83 +0,0 @@
from __future__ import annotations
import json
import os
from typing import Any, Dict, List
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.run_command import run_command
from pkgmgr.get_repo_identifier import get_repo_identifier
Repository = Dict[str, Any]
def handle_tools_command(
args,
ctx: CLIContext,
selected: List[Repository],
) -> None:
"""
Handle integration commands:
- explore (file manager)
- terminal (GNOME Terminal)
- code (VS Code workspace)
"""
# --------------------------------------------------------
# explore
# --------------------------------------------------------
if args.command == "explore":
for repository in selected:
run_command(
f"nautilus {repository['directory']} & disown"
)
return
# --------------------------------------------------------
# terminal
# --------------------------------------------------------
if args.command == "terminal":
for repository in selected:
run_command(
f'gnome-terminal --tab --working-directory="{repository["directory"]}"'
)
return
# --------------------------------------------------------
# code
# --------------------------------------------------------
if args.command == "code":
if not selected:
print("No repositories selected.")
return
identifiers = [
get_repo_identifier(repo, ctx.all_repositories)
for repo in selected
]
sorted_identifiers = sorted(identifiers)
workspace_name = "_".join(sorted_identifiers) + ".code-workspace"
workspaces_dir = os.path.expanduser(
ctx.config_merged.get("directories").get("workspaces")
)
os.makedirs(workspaces_dir, exist_ok=True)
workspace_file = os.path.join(workspaces_dir, workspace_name)
folders = [{"path": repository["directory"]} for repository in selected]
workspace_data = {
"folders": folders,
"settings": {},
}
if not os.path.exists(workspace_file):
with open(workspace_file, "w") as f:
json.dump(workspace_data, f, indent=4)
print(f"Created workspace file: {workspace_file}")
else:
print(f"Using existing workspace file: {workspace_file}")
run_command(f'code "{workspace_file}"')
return

View File

@@ -1,94 +0,0 @@
from __future__ import annotations
import sys
from typing import List
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.cli_core.proxy import maybe_handle_proxy
from pkgmgr.get_selected_repos import get_selected_repos
from pkgmgr.cli_core.commands import (
handle_repos_command,
handle_tools_command,
handle_release,
handle_version,
handle_config,
handle_make,
handle_changelog,
handle_branch,
)
def dispatch_command(args, ctx: CLIContext) -> None:
"""
Top-level command dispatcher.
Responsible for:
- computing selected repositories (where applicable)
- delegating to the correct command handler module
"""
# 1) Proxy commands (git, docker, docker compose) short-circuit.
if maybe_handle_proxy(args, ctx):
return
# 2) Determine if this command uses repository selection.
commands_with_selection: List[str] = [
"install",
"update",
"deinstall",
"delete",
"status",
"path",
"shell",
"code",
"explore",
"terminal",
"release",
"version",
"make",
"changelog",
# intentionally NOT "branch" it operates on cwd only
]
if args.command in commands_with_selection:
selected = get_selected_repos(
getattr(args, "all", False),
ctx.all_repositories,
getattr(args, "identifiers", []),
)
else:
selected = []
# 3) Delegate based on command.
if args.command in (
"install",
"update",
"deinstall",
"delete",
"status",
"path",
"shell",
"create",
"list",
):
handle_repos_command(args, ctx, selected)
elif args.command in ("code", "explore", "terminal"):
handle_tools_command(args, ctx, selected)
elif args.command == "release":
handle_release(args, ctx, selected)
elif args.command == "version":
handle_version(args, ctx, selected)
elif args.command == "changelog":
handle_changelog(args, ctx, selected)
elif args.command == "config":
handle_config(args, ctx)
elif args.command == "make":
handle_make(args, ctx, selected)
elif args.command == "branch":
# Branch commands currently operate on the current working
# directory only, not on the pkgmgr repository selection.
handle_branch(args, ctx)
else:
print(f"Unknown command: {args.command}")
sys.exit(2)

View File

@@ -1,75 +0,0 @@
import subprocess
import os
from pkgmgr.generate_alias import generate_alias
from pkgmgr.save_user_config import save_user_config
def config_init(user_config, defaults_config, bin_dir,USER_CONFIG_PATH:str):
"""
Scan the base directory (defaults_config["base"]) for repositories.
The folder structure is assumed to be:
{base}/{provider}/{account}/{repository}
For each repository found, automatically determine:
- provider, account, repository from folder names.
- verified: the latest commit (via 'git log -1 --format=%H').
- alias: generated from the repository name using generate_alias().
Repositories already defined in defaults_config["repositories"] or user_config["repositories"] are skipped.
"""
repositories_base_dir = os.path.expanduser(defaults_config["directories"]["repositories"])
if not os.path.isdir(repositories_base_dir):
print(f"Base directory '{repositories_base_dir}' does not exist.")
return
default_keys = {(entry.get("provider"), entry.get("account"), entry.get("repository"))
for entry in defaults_config.get("repositories", [])}
existing_keys = {(entry.get("provider"), entry.get("account"), entry.get("repository"))
for entry in user_config.get("repositories", [])}
existing_aliases = {entry.get("alias") for entry in user_config.get("repositories", []) if entry.get("alias")}
new_entries = []
for provider in os.listdir(repositories_base_dir):
provider_path = os.path.join(repositories_base_dir, provider)
if not os.path.isdir(provider_path):
continue
for account in os.listdir(provider_path):
account_path = os.path.join(provider_path, account)
if not os.path.isdir(account_path):
continue
for repo_name in os.listdir(account_path):
repo_path = os.path.join(account_path, repo_name)
if not os.path.isdir(repo_path):
continue
key = (provider, account, repo_name)
if key in default_keys or key in existing_keys:
continue
try:
result = subprocess.run(
["git", "log", "-1", "--format=%H"],
cwd=repo_path,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
check=True,
)
verified = result.stdout.strip()
except Exception as e:
verified = ""
print(f"Could not determine latest commit for {repo_name} ({provider}/{account}): {e}")
entry = {
"provider": provider,
"account": account,
"repository": repo_name,
"verified": {"commit": verified},
"ignore": True
}
alias = generate_alias({"repository": repo_name, "provider": provider, "account": account}, bin_dir, existing_aliases)
entry["alias"] = alias
existing_aliases.add(alias)
new_entries.append(entry)
print(f"Adding new repo entry: {entry}")
if new_entries:
user_config.setdefault("repositories", []).extend(new_entries)
save_user_config(user_config,USER_CONFIG_PATH)
else:
print("No new repositories found.")

View File

@@ -2,12 +2,18 @@
# -*- coding: utf-8 -*-
import os
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
def create_ink(repo, repositories_base_dir, bin_dir, all_repos,
quiet=False, preview=False):
def create_ink(
repo,
repositories_base_dir,
bin_dir,
all_repos,
quiet: bool = False,
preview: bool = False,
) -> None:
"""
Create a symlink for the repository's command.
@@ -18,6 +24,11 @@ def create_ink(repo, repositories_base_dir, bin_dir, all_repos,
Behavior:
- If repo["command"] is defined create a symlink to it.
- If repo["command"] is missing or None do NOT create a link.
Safety:
- If the resolved command path is identical to the final link target,
we skip symlink creation to avoid self-referential symlinks that
would break shell resolution ("too many levels of symbolic links").
"""
repo_identifier = get_repo_identifier(repo, all_repos)
@@ -31,6 +42,27 @@ def create_ink(repo, repositories_base_dir, bin_dir, all_repos,
link_path = os.path.join(bin_dir, repo_identifier)
# ------------------------------------------------------------------
# Safety guard: avoid self-referential symlinks
#
# Example of a broken situation we must avoid:
# - command = ~/.local/bin/package-manager
# - link_path = ~/.local/bin/package-manager
# - create_ink() removes the real binary and creates a symlink
# pointing to itself → zsh: too many levels of symbolic links
#
# If the resolved command already lives exactly at the target path,
# we treat it as "already installed" and skip any modification.
# ------------------------------------------------------------------
if os.path.abspath(command) == os.path.abspath(link_path):
if not quiet:
print(
f"[pkgmgr] Command for '{repo_identifier}' already lives at "
f"'{link_path}'. Skipping symlink creation to avoid a "
"self-referential link."
)
return
if preview:
print(f"[Preview] Would link {link_path}{command}")
return
@@ -65,7 +97,10 @@ def create_ink(repo, repositories_base_dir, bin_dir, all_repos,
if alias_name == repo_identifier:
if not quiet:
print(f"Alias '{alias_name}' equals identifier. Skipping alias creation.")
print(
f"Alias '{alias_name}' equals identifier. "
"Skipping alias creation."
)
return
try:
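To make the self-referential-symlink guard above concrete, here is a minimal standalone sketch; the helper name and paths are invented for illustration and are not part of create_ink:

import os

def safe_symlink(command: str, link_path: str) -> None:
    """Create link_path -> command unless both resolve to the same file."""
    # If the command already lives exactly at the link target, replacing it
    # with a symlink would point the link at itself ("too many levels of
    # symbolic links"), so we skip instead.
    if os.path.abspath(command) == os.path.abspath(link_path):
        print(f"'{command}' is already the link target; skipping.")
        return
    if os.path.lexists(link_path):
        os.remove(link_path)
    os.symlink(command, link_path)

# Both arguments resolve to the same file, so no link is created.
safe_symlink(
    os.path.expanduser("~/.local/bin/package-manager"),
    os.path.expanduser("~/.local/bin/package-manager"),
)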

View File

@@ -0,0 +1,207 @@
import os
import shutil
from typing import Optional, List, Dict, Any
Repository = Dict[str, Any]
def _is_executable(path: str) -> bool:
return os.path.exists(path) and os.access(path, os.X_OK)
def _find_python_package_root(repo_dir: str) -> Optional[str]:
"""
Detect a Python src-layout package:
repo_dir/src/<package>/__main__.py
Returns the directory containing __main__.py (e.g. ".../src/arc")
or None if no such structure exists.
"""
src_dir = os.path.join(repo_dir, "src")
if not os.path.isdir(src_dir):
return None
for root, _dirs, files in os.walk(src_dir):
if "__main__.py" in files:
return root
return None
def _nix_binary_candidates(home: str, names: List[str]) -> List[str]:
"""
Build possible Nix profile binary paths for a list of candidate names.
"""
return [
os.path.join(home, ".nix-profile", "bin", name)
for name in names
if name
]
def _path_binary_candidates(names: List[str]) -> List[str]:
"""
Resolve candidate names via PATH using shutil.which.
Returns only existing, executable paths.
"""
binaries: List[str] = []
for name in names:
if not name:
continue
candidate = shutil.which(name)
if candidate and _is_executable(candidate):
binaries.append(candidate)
return binaries
def resolve_command_for_repo(
repo: Repository,
repo_identifier: str,
repo_dir: str,
) -> Optional[str]:
"""
Resolve the executable command for a repository.
Semantics:
----------
- If the repository explicitly defines the key "command" (even if None),
that is treated as authoritative and returned immediately.
This allows e.g.:
command: null
for pure library repositories with no CLI.
- If "command" is not defined, we try to discover a suitable CLI command:
1. Prefer already installed binaries (PATH, Nix profile).
2. For Python src-layout packages (src/*/__main__.py), try to infer
a sensible command name (alias, repo identifier, repository name,
package directory name) and resolve those via PATH / Nix.
3. For script-style repos, fall back to main.sh / main.py.
4. If nothing matches, return None (no CLI) instead of raising.
The caller can interpret:
- str → path to the command (symlink target)
- None → no CLI command for this repository
"""
# ------------------------------------------------------------------
# 1) Explicit command declaration (including explicit "no command")
# ------------------------------------------------------------------
if "command" in repo:
# May be a string path or None. None means: this repo intentionally
# has no CLI command and should not be resolved.
return repo.get("command")
home = os.path.expanduser("~")
# ------------------------------------------------------------------
# 2) Collect candidate names for CLI binaries
#
# Order of preference:
# - repo_identifier (usually alias or configured id)
# - alias (if defined)
# - repository name (e.g. "analysis-ready-code")
# - python package name (e.g. "arc" from src/arc/__main__.py)
# ------------------------------------------------------------------
alias = repo.get("alias")
repository_name = repo.get("repository")
python_package_root = _find_python_package_root(repo_dir)
if python_package_root:
python_package_name = os.path.basename(python_package_root)
else:
python_package_name = None
candidate_names: List[str] = []
seen: set[str] = set()
for name in (
repo_identifier,
alias,
repository_name,
python_package_name,
):
if name and name not in seen:
seen.add(name)
candidate_names.append(name)
# ------------------------------------------------------------------
# 3) Try resolve via PATH (non-system and system) and Nix profile
# ------------------------------------------------------------------
# a) PATH binaries
path_binaries = _path_binary_candidates(candidate_names)
# b) Classify system (/usr/...) vs non-system
system_binary: Optional[str] = None
non_system_binary: Optional[str] = None
for bin_path in path_binaries:
if bin_path.startswith("/usr"):
# Last system binary wins, but usually there is only one anyway
system_binary = bin_path
else:
non_system_binary = bin_path
break # prefer the first non-system binary
# c) Nix profile binaries
nix_binaries = [
path for path in _nix_binary_candidates(home, candidate_names)
if _is_executable(path)
]
nix_binary = nix_binaries[0] if nix_binaries else None
# Decide priority:
# 1) non-system PATH binary (user/venv)
# 2) Nix profile binary
# 3) system binary (/usr/...) → only if we want to expose it
if non_system_binary:
return non_system_binary
if nix_binary:
return nix_binary
if system_binary:
# Respect system packages. Depending on your policy you can decide
# to return None (no symlink, OS owns the command) or to expose it.
# Here we choose: no symlink for pure system binaries.
if repo.get("ignore_system_binary", False):
print(
f"[pkgmgr] System binary for '{repo_identifier}' found at "
f"{system_binary}; no symlink will be created."
)
return None
# ------------------------------------------------------------------
# 4) Script-style repository: fallback to main.sh / main.py
# ------------------------------------------------------------------
main_sh = os.path.join(repo_dir, "main.sh")
main_py = os.path.join(repo_dir, "main.py")
if _is_executable(main_sh):
return main_sh
if os.path.exists(main_py):
return main_py
# ------------------------------------------------------------------
# 5) No CLI discovered
#
# At this point we may still have a Python package structure, but
# without any installed CLI entry point and without main.sh/main.py.
#
# This is perfectly valid for library-only repositories, so we do
# NOT treat this as an error. The caller can then decide to simply
# skip symlink creation.
# ------------------------------------------------------------------
if python_package_root:
print(
f"[INFO] Repository '{repo_identifier}' appears to be a Python "
f"package at '{python_package_root}' but no CLI entry point was "
f"found (PATH, Nix, main.sh/main.py). Treating it as a "
f"library-only repository with no command."
)
return None
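A hedged usage sketch of the resolution order described above; the repository dict and directory below are invented, and the actual result depends on what is installed on the host:

repo = {
    "provider": "github.com",
    "account": "example",
    "repository": "analysis-ready-code",
    "alias": "arc",
    # no "command" key -> discovery runs (PATH, Nix profile, main.sh/main.py)
}

command = resolve_command_for_repo(
    repo=repo,
    repo_identifier="arc",
    repo_dir="/home/user/Repositories/github.com/example/analysis-ready-code",
)

if command is None:
    print("library-only repository: no symlink will be created")
else:
    print(f"symlink target would be: {command}")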

View File

pkgmgr/core/config/load.py Normal file (305 lines added)
View File

@@ -0,0 +1,305 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Load and merge pkgmgr configuration.
Layering rules:
1. Defaults / category files:
- Zuerst werden alle *.yml/*.yaml (außer config.yaml) im
Benutzerverzeichnis geladen:
~/.config/pkgmgr/
- Falls dort keine passenden Dateien existieren, wird auf die im
Paket / Projekt mitgelieferten Config-Verzeichnisse zurückgegriffen:
<pkg_root>/config_defaults
<pkg_root>/config
<project_root>/config_defaults
<project_root>/config
Dabei werden ebenfalls alle *.yml/*.yaml als Layer geladen.
- Der Dateiname ohne Endung (stem) wird als Kategorie-Name
verwendet und in repo["category_files"] eingetragen.
2. User config:
- ~/.config/pkgmgr/config.yaml (oder der übergebene Pfad)
wird geladen und PER LISTEN-MERGE über die Defaults gelegt:
- directories: dict deep-merge
- repositories: per _merge_repo_lists (kein Löschen!)
3. Ergebnis:
- Ein dict mit mindestens:
config["directories"] (dict)
config["repositories"] (list[dict])
"""
from __future__ import annotations
import os
from pathlib import Path
from typing import Any, Dict, List, Tuple
import yaml
Repo = Dict[str, Any]
# ---------------------------------------------------------------------------
# Helper functions
# ---------------------------------------------------------------------------
def _deep_merge(base: Dict[str, Any], override: Dict[str, Any]) -> Dict[str, Any]:
"""
Recursively merge two dictionaries.
Values from `override` win over values in `base`.
"""
for key, value in override.items():
if (
key in base
and isinstance(base[key], dict)
and isinstance(value, dict)
):
_deep_merge(base[key], value)
else:
base[key] = value
return base
def _repo_key(repo: Repo) -> Tuple[str, str, str]:
"""
Normalised key for identifying a repository across config files.
"""
return (
str(repo.get("provider", "")),
str(repo.get("account", "")),
str(repo.get("repository", "")),
)
def _merge_repo_lists(
base_list: List[Repo],
new_list: List[Repo],
category_name: str | None = None,
) -> List[Repo]:
"""
Merge two repository lists, matching by (provider, account, repository).
- If a repo from new_list does not exist yet, it is added.
- If it already exists, its fields are overridden via deep-merge.
- If category_name is set, it is recorded in
repo["category_files"].
"""
index: Dict[Tuple[str, str, str], Repo] = {
_repo_key(r): r for r in base_list
}
for src in new_list:
key = _repo_key(src)
if key == ("", "", ""):
# Incomplete key -> just append
dst = dict(src)
if category_name:
dst.setdefault("category_files", [])
if category_name not in dst["category_files"]:
dst["category_files"].append(category_name)
base_list.append(dst)
continue
existing = index.get(key)
if existing is None:
dst = dict(src)
if category_name:
dst.setdefault("category_files", [])
if category_name not in dst["category_files"]:
dst["category_files"].append(category_name)
base_list.append(dst)
index[key] = dst
else:
_deep_merge(existing, src)
if category_name:
existing.setdefault("category_files", [])
if category_name not in existing["category_files"]:
existing["category_files"].append(category_name)
return base_list
def _load_yaml_file(path: Path) -> Dict[str, Any]:
"""
Load a single YAML file as dict. Non-dicts yield {}.
"""
if not path.is_file():
return {}
with path.open("r", encoding="utf-8") as f:
data = yaml.safe_load(f) or {}
if not isinstance(data, dict):
return {}
return data
def _load_layer_dir(
config_dir: Path,
skip_filename: str | None = None,
) -> Dict[str, Any]:
"""
Load all *.yml/*.yaml from a directory as layered defaults.
- skip_filename: file name (e.g. "config.yaml") that should be
ignored (e.g. the user config itself).
Returns:
{
"directories": {...},
"repositories": [...],
}
"""
defaults: Dict[str, Any] = {"directories": {}, "repositories": []}
if not config_dir.is_dir():
return defaults
yaml_files = [
p
for p in config_dir.iterdir()
if p.is_file()
and p.suffix.lower() in (".yml", ".yaml")
and (skip_filename is None or p.name != skip_filename)
]
if not yaml_files:
return defaults
yaml_files.sort(key=lambda p: p.name)
for path in yaml_files:
data = _load_yaml_file(path)
category_name = path.stem  # file name without .yml/.yaml
dirs = data.get("directories")
if isinstance(dirs, dict):
defaults.setdefault("directories", {})
_deep_merge(defaults["directories"], dirs)
repos = data.get("repositories")
if isinstance(repos, list):
defaults.setdefault("repositories", [])
_merge_repo_lists(
defaults["repositories"],
repos,
category_name=category_name,
)
return defaults
def _load_defaults_from_package_or_project() -> Dict[str, Any]:
"""
Fallback: try to load defaults from the installed package OR
from the project root:
<pkg_root>/config_defaults
<pkg_root>/config
<project_root>/config_defaults
<project_root>/config
"""
try:
import pkgmgr # type: ignore
except Exception:
return {"directories": {}, "repositories": []}
pkg_root = Path(pkgmgr.__file__).resolve().parent
project_root = pkg_root.parent
candidates = [
pkg_root / "config_defaults",
pkg_root / "config",
project_root / "config_defaults",
project_root / "config",
]
for cand in candidates:
defaults = _load_layer_dir(cand, skip_filename=None)
if defaults["directories"] or defaults["repositories"]:
return defaults
return {"directories": {}, "repositories": []}
# ---------------------------------------------------------------------------
# Main function
# ---------------------------------------------------------------------------
def load_config(user_config_path: str) -> Dict[str, Any]:
"""
Load and merge configuration for pkgmgr.
Steps:
1. Determine ~/.config/pkgmgr/ (or the directory of user_config_path).
2. Load all *.yml/*.yaml there (except the user config itself) as
defaults / category layers.
3. If nothing was found there, fall back to the package/project defaults.
4. Load the user config file itself (if present).
5. Merge:
- directories: deep-merge (defaults <- user)
- repositories: _merge_repo_lists (defaults <- user)
"""
user_config_path_expanded = os.path.expanduser(user_config_path)
user_cfg_path = Path(user_config_path_expanded)
config_dir = user_cfg_path.parent
if not str(config_dir):
# Fallback in case someone passes only "config.yaml"
config_dir = Path(os.path.expanduser("~/.config/pkgmgr"))
config_dir.mkdir(parents=True, exist_ok=True)
user_cfg_name = user_cfg_path.name
# 1+2) Defaults / category layers from the user directory
defaults = _load_layer_dir(config_dir, skip_filename=user_cfg_name)
# 3) If nothing was found there, fall back to package/project defaults
if not defaults["directories"] and not defaults["repositories"]:
defaults = _load_defaults_from_package_or_project()
defaults.setdefault("directories", {})
defaults.setdefault("repositories", [])
# 4) User config
user_cfg: Dict[str, Any] = {}
if user_cfg_path.is_file():
user_cfg = _load_yaml_file(user_cfg_path)
user_cfg.setdefault("directories", {})
user_cfg.setdefault("repositories", [])
# 5) Merge: directories via deep-merge, repositories via list merge
merged: Dict[str, Any] = {}
# directories
merged["directories"] = {}
_deep_merge(merged["directories"], defaults["directories"])
_deep_merge(merged["directories"], user_cfg["directories"])
# repositories
merged["repositories"] = []
_merge_repo_lists(merged["repositories"], defaults["repositories"], category_name=None)
_merge_repo_lists(merged["repositories"], user_cfg["repositories"], category_name=None)
# other top-level keys (if present)
other_keys = (set(defaults.keys()) | set(user_cfg.keys())) - {
"directories",
"repositories",
}
for key in other_keys:
base_val = defaults.get(key)
override_val = user_cfg.get(key)
if isinstance(base_val, dict) and isinstance(override_val, dict):
merged[key] = _deep_merge(dict(base_val), override_val)
elif override_val is not None:
merged[key] = override_val
else:
merged[key] = base_val
return merged
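To illustrate the layering, a small self-contained sketch: one category file plus a user config.yaml written to a temporary directory (all file contents are invented for the example):

import pathlib
import tempfile

import yaml

from pkgmgr.core.config.load import load_config

tmp = pathlib.Path(tempfile.mkdtemp())

# Category layer: tools.yml -> category name "tools"
(tmp / "tools.yml").write_text(yaml.safe_dump({
    "repositories": [
        {"provider": "github.com", "account": "example", "repository": "arc"},
    ],
}))

# User config: adds a directory and extends the same repository entry.
(tmp / "config.yaml").write_text(yaml.safe_dump({
    "directories": {"repositories": "~/Repositories"},
    "repositories": [
        {"provider": "github.com", "account": "example",
         "repository": "arc", "alias": "arc"},
    ],
}))

merged = load_config(str(tmp / "config.yaml"))
# The "arc" entry is merged rather than duplicated, carries
# category_files == ["tools"], and gains the alias from the user config;
# merged["directories"]["repositories"] == "~/Repositories".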

View File

@@ -0,0 +1,200 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import os
import re
from typing import Any, Dict, List, Sequence
from pkgmgr.core.repository.resolve import resolve_repos
from pkgmgr.core.repository.ignored import filter_ignored
Repository = Dict[str, Any]
def _compile_maybe_regex(pattern: str):
"""
If pattern is of the form /.../, return a compiled regex (case-insensitive).
Otherwise return None.
"""
if len(pattern) >= 2 and pattern.startswith("/") and pattern.endswith("/"):
try:
return re.compile(pattern[1:-1], re.IGNORECASE)
except re.error:
return None
return None
def _match_pattern(value: str, pattern: str) -> bool:
"""
Match a value against a pattern that may be a substring or /regex/.
"""
if not pattern:
return True
regex = _compile_maybe_regex(pattern)
if regex:
return bool(regex.search(value))
return pattern.lower() in value.lower()
def _match_any(values: Sequence[str], pattern: str) -> bool:
"""
Return True if any of the values matches the pattern.
"""
for v in values:
if _match_pattern(v, pattern):
return True
return False
def _build_identifier_string(repo: Repository) -> str:
"""
Build a combined identifier string for string-based filtering.
"""
provider = str(repo.get("provider", ""))
account = str(repo.get("account", ""))
repository = str(repo.get("repository", ""))
alias = str(repo.get("alias", ""))
description = str(repo.get("description", ""))
directory = str(repo.get("directory", ""))
parts = [
provider,
account,
repository,
alias,
f"{provider}/{account}/{repository}",
description,
directory,
]
return " ".join(p for p in parts if p)
def _apply_filters(
repos: List[Repository],
string_pattern: str,
category_patterns: List[str],
tag_patterns: List[str],
) -> List[Repository]:
if not string_pattern and not category_patterns and not tag_patterns:
return repos
filtered: List[Repository] = []
for repo in repos:
# String filter
if string_pattern:
ident_str = _build_identifier_string(repo)
if not _match_pattern(ident_str, string_pattern):
continue
# Category filter: only real categories, NOT tags
if category_patterns:
cats: List[str] = []
cats.extend(map(str, repo.get("category_files", [])))
if "category" in repo:
cats.append(str(repo["category"]))
if not cats:
continue
ok = True
for pat in category_patterns:
if not _match_any(cats, pat):
ok = False
break
if not ok:
continue
# Tag filter: YAML tags only
if tag_patterns:
tags: List[str] = list(map(str, repo.get("tags", [])))
if not tags:
continue
ok = True
for pat in tag_patterns:
if not _match_any(tags, pat):
ok = False
break
if not ok:
continue
filtered.append(repo)
return filtered
def _maybe_filter_ignored(args, repos: List[Repository]) -> List[Repository]:
"""
Apply ignore filtering unless the caller explicitly opted to include ignored
repositories (via args.include_ignored).
Note: this helper is used only for *implicit* selections (all / filters /
by-directory). For *explicit* identifiers we do NOT filter ignored repos,
so the user can still target them directly if desired.
"""
include_ignored: bool = bool(getattr(args, "include_ignored", False))
if include_ignored:
return repos
return filter_ignored(repos)
def get_selected_repos(args, all_repositories: List[Repository]) -> List[Repository]:
"""
Compute the list of repositories selected by CLI arguments.
Modes:
- If identifiers are given: select via resolve_repos() from all_repositories.
Ignored repositories are *not* filtered here, so explicit identifiers
always win.
- Else if any of --category/--string/--tag is used: start from
all_repositories, apply filters and then drop ignored repos.
- Else if --all is set: select all_repositories and then drop ignored repos.
- Else: try to select the repository of the current working directory
and then drop it if it is ignored.
The ignore filter can be bypassed by setting args.include_ignored = True
(e.g. via a CLI flag --include-ignored).
"""
identifiers: List[str] = getattr(args, "identifiers", []) or []
use_all: bool = bool(getattr(args, "all", False))
category_patterns: List[str] = getattr(args, "category", []) or []
string_pattern: str = getattr(args, "string", "") or ""
tag_patterns: List[str] = getattr(args, "tag", []) or []
has_filters = bool(category_patterns or string_pattern or tag_patterns)
# 1) Explicit identifiers win and bypass ignore filtering
if identifiers:
base = resolve_repos(identifiers, all_repositories)
return _apply_filters(base, string_pattern, category_patterns, tag_patterns)
# 2) Filter-only mode: start from all repositories
if has_filters:
base = _apply_filters(
list(all_repositories),
string_pattern,
category_patterns,
tag_patterns,
)
return _maybe_filter_ignored(args, base)
# 3) --all (no filters): all repos
if use_all:
base = list(all_repositories)
return _maybe_filter_ignored(args, base)
# 4) Fallback: try to select repository of current working directory
cwd = os.path.abspath(os.getcwd())
by_dir = [
repo
for repo in all_repositories
if os.path.abspath(str(repo.get("directory", ""))) == cwd
]
if by_dir:
return _maybe_filter_ignored(args, by_dir)
# No specific match -> empty list
return []
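A usage sketch of the selection modes, using an argparse-style namespace and an invented repository list; get_selected_repos refers to the function defined above:

from argparse import Namespace

all_repositories = [
    {"provider": "github.com", "account": "kevinveenbirkenbach",
     "repository": "package-manager", "alias": "pkgmgr",
     "tags": ["cli"], "category_files": ["tools"]},
    {"provider": "github.com", "account": "example",
     "repository": "library-only", "ignore": True},
]

# Filter mode: repositories whose category matches "tools";
# ignored repositories are then dropped via filter_ignored.
args = Namespace(identifiers=[], all=False, category=["tools"],
                 string="", tag=[], include_ignored=False)
selected = get_selected_repos(args, all_repositories)

# String mode with /regex/: patterns wrapped in slashes are compiled
# case-insensitively; plain strings are matched as substrings.
args = Namespace(identifiers=[], all=False, category=[],
                 string="/package-.*/", tag=[], include_ignored=False)
selected = get_selected_repos(args, all_repositories)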

View File

@@ -1,29 +0,0 @@
import os
import sys
from .resolve_repos import resolve_repos
from .filter_ignored import filter_ignored
from .get_repo_dir import get_repo_dir
def get_selected_repos(show_all: bool, all_repos_list, identifiers=None):
if show_all:
selected = all_repos_list
else:
selected = resolve_repos(identifiers, all_repos_list)
# If no repositories were found using the provided identifiers,
# try to automatically select based on the current directory:
if not selected:
current_dir = os.getcwd()
directory_name = os.path.basename(current_dir)
# Pack the directory name in a list since resolve_repos expects a list.
auto_selected = resolve_repos([directory_name], all_repos_list)
if auto_selected:
# Check if the path of the first auto-selected repository matches the current directory.
if os.path.abspath(auto_selected[0].get("directory")) == os.path.abspath(current_dir):
print(f"Repository {auto_selected[0]['repository']} has been auto-selected by path.")
selected = auto_selected
filtered = filter_ignored(selected)
if not filtered:
print("Error: No repositories had been selected.")
sys.exit(4)
return filtered

View File

@@ -1,294 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Repository installation pipeline for pkgmgr.
This module orchestrates the installation of repositories by:
1. Ensuring the repository directory exists (cloning if necessary).
2. Verifying the repository according to the configured policies.
3. Creating executable links using create_ink(), after resolving the
appropriate command via resolve_command_for_repo().
4. Running a sequence of modular installer components that handle
specific technologies or manifests (PKGBUILD, Nix flakes, Python
via pyproject.toml, Makefile, OS-specific package metadata).
The goal is to keep this file thin and delegate most logic to small,
focused installer classes.
"""
import os
from typing import List, Dict, Any
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.create_ink import create_ink
from pkgmgr.verify import verify_repository
from pkgmgr.clone_repos import clone_repos
from pkgmgr.context import RepoContext
from pkgmgr.resolve_command import resolve_command_for_repo
# Installer implementations
from pkgmgr.installers.os_packages import (
ArchPkgbuildInstaller,
DebianControlInstaller,
RpmSpecInstaller,
)
from pkgmgr.installers.nix_flake import NixFlakeInstaller
from pkgmgr.installers.python import PythonInstaller
from pkgmgr.installers.makefile import MakefileInstaller
# Layering:
# 1) OS packages: PKGBUILD / debian/control / RPM spec → os-deps.*
# 2) Nix flakes (flake.nix) → e.g. python-runtime, make-install
# 3) Python (pyproject.toml) → e.g. python-runtime, make-install
# 4) Makefile fallback → e.g. make-install
INSTALLERS = [
ArchPkgbuildInstaller(), # Arch
DebianControlInstaller(), # Debian/Ubuntu
RpmSpecInstaller(), # Fedora/RHEL/CentOS
NixFlakeInstaller(), # flake.nix (Nix layer)
PythonInstaller(), # pyproject.toml
MakefileInstaller(), # generic 'make install'
]
def _ensure_repo_dir(
repo: Dict[str, Any],
repositories_base_dir: str,
all_repos: List[Dict[str, Any]],
preview: bool,
no_verification: bool,
clone_mode: str,
identifier: str,
) -> str:
"""
Ensure the repository directory exists. If not, attempt to clone it.
Returns the repository directory path or an empty string if cloning failed.
"""
repo_dir = get_repo_dir(repositories_base_dir, repo)
if not os.path.exists(repo_dir):
print(f"Repository directory '{repo_dir}' does not exist. Cloning it now...")
clone_repos(
[repo],
repositories_base_dir,
all_repos,
preview,
no_verification,
clone_mode,
)
if not os.path.exists(repo_dir):
print(f"Cloning failed for repository {identifier}. Skipping installation.")
return ""
return repo_dir
def _verify_repo(
repo: Dict[str, Any],
repo_dir: str,
no_verification: bool,
identifier: str,
) -> bool:
"""
Verify the repository using verify_repository().
Returns True if installation should proceed, False if it should be skipped.
"""
verified_info = repo.get("verified")
verified_ok, errors, commit_hash, signing_key = verify_repository(
repo,
repo_dir,
mode="local",
no_verification=no_verification,
)
if not no_verification and verified_info and not verified_ok:
print(f"Warning: Verification failed for {identifier}:")
for err in errors:
print(f" - {err}")
choice = input("Proceed with installation? (y/N): ").strip().lower()
if choice != "y":
print(f"Skipping installation for {identifier}.")
return False
return True
def _create_context(
repo: Dict[str, Any],
identifier: str,
repo_dir: str,
repositories_base_dir: str,
bin_dir: str,
all_repos: List[Dict[str, Any]],
no_verification: bool,
preview: bool,
quiet: bool,
clone_mode: str,
update_dependencies: bool,
) -> RepoContext:
"""
Build a RepoContext for the given repository and parameters.
"""
return RepoContext(
repo=repo,
identifier=identifier,
repo_dir=repo_dir,
repositories_base_dir=repositories_base_dir,
bin_dir=bin_dir,
all_repos=all_repos,
no_verification=no_verification,
preview=preview,
quiet=quiet,
clone_mode=clone_mode,
update_dependencies=update_dependencies,
)
def install_repos(
selected_repos: List[Dict[str, Any]],
repositories_base_dir: str,
bin_dir: str,
all_repos: List[Dict[str, Any]],
no_verification: bool,
preview: bool,
quiet: bool,
clone_mode: str,
update_dependencies: bool,
) -> None:
"""
Install repositories by creating symbolic links and processing standard
manifest files (PKGBUILD, flake.nix, Python manifests, Makefile, etc.)
via dedicated installer components.
Any installer failure (SystemExit) is treated as fatal and will abort
the current installation.
"""
for repo in selected_repos:
identifier = get_repo_identifier(repo, all_repos)
repo_dir = _ensure_repo_dir(
repo=repo,
repositories_base_dir=repositories_base_dir,
all_repos=all_repos,
preview=preview,
no_verification=no_verification,
clone_mode=clone_mode,
identifier=identifier,
)
if not repo_dir:
continue
if not _verify_repo(
repo=repo,
repo_dir=repo_dir,
no_verification=no_verification,
identifier=identifier,
):
continue
ctx = _create_context(
repo=repo,
identifier=identifier,
repo_dir=repo_dir,
repositories_base_dir=repositories_base_dir,
bin_dir=bin_dir,
all_repos=all_repos,
no_verification=no_verification,
preview=preview,
quiet=quiet,
clone_mode=clone_mode,
update_dependencies=update_dependencies,
)
# ------------------------------------------------------------
# Resolve the command for this repository before creating the link.
# If no command is resolved, no link will be created.
# ------------------------------------------------------------
resolved_command = resolve_command_for_repo(
repo=repo,
repo_identifier=identifier,
repo_dir=repo_dir,
)
if resolved_command:
repo["command"] = resolved_command
else:
repo.pop("command", None)
# ------------------------------------------------------------
# Create the symlink using create_ink (if a command is set).
# ------------------------------------------------------------
create_ink(
repo,
repositories_base_dir,
bin_dir,
all_repos,
quiet=quiet,
preview=preview,
)
# Track which logical capabilities have already been provided by
# earlier installers for this repository. This allows us to skip
# installers that would only duplicate work (e.g. Python runtime
# already provided by Nix flake → skip pyproject/Makefile).
provided_capabilities: set[str] = set()
# Run all installers that support this repository, but only if they
# provide at least one capability that is not yet satisfied.
for installer in INSTALLERS:
if not installer.supports(ctx):
continue
caps = installer.discover_capabilities(ctx)
# If the installer declares capabilities and *all* of them are
# already provided, we can safely skip it.
if caps and caps.issubset(provided_capabilities):
if not quiet:
print(
f"Skipping installer {installer.__class__.__name__} "
f"for {identifier} capabilities {caps} already provided."
)
continue
# ------------------------------------------------------------
# Debug output + clear error if an installer fails
# ------------------------------------------------------------
if not quiet:
print(
f"[pkgmgr] Running installer {installer.__class__.__name__} "
f"for {identifier} in '{repo_dir}' "
f"(new capabilities: {caps or ''})..."
)
try:
installer.run(ctx)
except SystemExit as exc:
exit_code = exc.code if isinstance(exc.code, int) else str(exc.code)
print(
f"[ERROR] Installer {installer.__class__.__name__} failed "
f"for repository {identifier} (dir: {repo_dir}) "
f"with exit code {exit_code}."
)
print(
"[ERROR] This usually means an underlying command failed "
"(e.g. 'make install', 'nix build', 'pip install', ...)."
)
print(
"[ERROR] Check the log above for the exact command output. "
"You can also run this repository in isolation via:\n"
f" pkgmgr install {identifier} --clone-mode shallow --no-verification"
)
# Re-raise so that CLI/tests fail clearly,
# but now with much more context.
raise
# Only merge capabilities if the installer succeeded
provided_capabilities.update(caps)
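The capability de-duplication in the (now removed) loop above can be illustrated in isolation; the installer names and capability strings are taken from the layering comment earlier in this file, everything else is invented:

# Stand-ins: (installer name, capabilities it would provide)
installers = [
    ("NixFlakeInstaller", {"python-runtime", "make-install"}),
    ("PythonInstaller", {"python-runtime"}),
    ("MakefileInstaller", {"make-install"}),
]

provided_capabilities: set[str] = set()
for name, caps in installers:
    # Skip an installer whose capabilities are all already provided.
    if caps and caps.issubset(provided_capabilities):
        print(f"Skipping {name}: capabilities {caps} already provided.")
        continue
    print(f"Running {name} (new capabilities: {caps - provided_capabilities})")
    provided_capabilities.update(caps)
# NixFlakeInstaller provides both capabilities first, so the Python and
# Makefile installers are skipped.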

View File

@@ -1,19 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installer package for pkgmgr.
This exposes all installer classes so users can import them directly from
pkgmgr.installers.
"""
from pkgmgr.installers.base import BaseInstaller # noqa: F401
from pkgmgr.installers.nix_flake import NixFlakeInstaller # noqa: F401
from pkgmgr.installers.python import PythonInstaller # noqa: F401
from pkgmgr.installers.makefile import MakefileInstaller # noqa: F401
# OS-specific installers
from pkgmgr.installers.os_packages.arch_pkgbuild import ArchPkgbuildInstaller # noqa: F401
from pkgmgr.installers.os_packages.debian_control import DebianControlInstaller # noqa: F401
from pkgmgr.installers.os_packages.rpm_spec import RpmSpecInstaller # noqa: F401

View File

@@ -1,93 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installer that triggers `make install` if a Makefile is present and
the Makefile actually defines an 'install' target.
This is useful for repositories that expose a standard Makefile-based
installation step.
"""
import os
import re
from pkgmgr.context import RepoContext
from pkgmgr.installers.base import BaseInstaller
from pkgmgr.run_command import run_command
class MakefileInstaller(BaseInstaller):
"""Run `make install` if a Makefile with an 'install' target exists."""
# Logical layer name, used by capability matchers.
layer = "makefile"
MAKEFILE_NAME = "Makefile"
def supports(self, ctx: RepoContext) -> bool:
"""Return True if a Makefile exists in the repository directory."""
makefile_path = os.path.join(ctx.repo_dir, self.MAKEFILE_NAME)
return os.path.exists(makefile_path)
def _has_install_target(self, makefile_path: str) -> bool:
"""
Check whether the Makefile defines an 'install' target.
We treat the presence of a real install target as either:
- a line starting with 'install:' (optionally preceded by whitespace), or
- a .PHONY line that lists 'install' as one of the targets.
"""
try:
with open(makefile_path, "r", encoding="utf-8", errors="ignore") as f:
content = f.read()
except OSError:
# If we cannot read the Makefile for some reason, assume no target.
return False
# install: ...
if re.search(r"^\s*install\s*:", content, flags=re.MULTILINE):
return True
# .PHONY: ... install ...
if re.search(r"^\s*\.PHONY\s*:\s*.*\binstall\b", content, flags=re.MULTILINE):
return True
return False
def run(self, ctx: RepoContext) -> None:
"""
Execute `make install` in the repository directory, but only if an
'install' target is actually defined in the Makefile.
Any failure in `make install` is treated as a fatal error and will
propagate as SystemExit from run_command().
"""
makefile_path = os.path.join(ctx.repo_dir, self.MAKEFILE_NAME)
if not os.path.exists(makefile_path):
# Should normally not happen if supports() was checked before,
# but keep this guard for robustness.
if not ctx.quiet:
print(
f"[pkgmgr] Makefile '{makefile_path}' not found, "
"skipping make install."
)
return
if not self._has_install_target(makefile_path):
if not ctx.quiet:
print(
"[pkgmgr] Skipping Makefile install: no 'install' target "
f"found in {makefile_path}."
)
return
if not ctx.quiet:
print(
f"[pkgmgr] Running 'make install' in {ctx.repo_dir} "
"(install target detected in Makefile)."
)
cmd = "make install"
run_command(cmd, cwd=ctx.repo_dir, preview=ctx.preview)
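For reference, the install-target detection used by this (now removed) installer can be exercised on its own; the Makefile snippets below are invented:

import re

def has_install_target(content: str) -> bool:
    # An 'install:' rule line, or a .PHONY declaration listing 'install'.
    if re.search(r"^\s*install\s*:", content, flags=re.MULTILINE):
        return True
    if re.search(r"^\s*\.PHONY\s*:\s*.*\binstall\b", content, flags=re.MULTILINE):
        return True
    return False

assert has_install_target(".PHONY: build install\ninstall:\n\tcp pkgmgr /usr/local/bin\n")
assert not has_install_target("build:\n\tgcc -o demo main.c\n")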

View File

@@ -1,106 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installer for Nix flakes.
If a repository contains flake.nix and the 'nix' command is available, this
installer will try to install profile outputs from the flake.
Behavior:
- If flake.nix is present and `nix` exists on PATH:
* First remove any existing `package-manager` profile entry (best-effort).
* Then install the flake outputs (`pkgmgr`, `default`) via `nix profile install`.
- Failure installing `pkgmgr` is treated as fatal.
- Failure installing `default` is logged as an error/warning but does not abort.
"""
import os
import shutil
from typing import TYPE_CHECKING
from pkgmgr.installers.base import BaseInstaller
from pkgmgr.run_command import run_command
if TYPE_CHECKING:
from pkgmgr.context import RepoContext
from pkgmgr.install_repos import InstallContext
class NixFlakeInstaller(BaseInstaller):
"""Install Nix flake profiles for repositories that define flake.nix."""
# Logical layer name, used by capability matchers.
layer = "nix"
FLAKE_FILE = "flake.nix"
PROFILE_NAME = "package-manager"
def supports(self, ctx: "RepoContext") -> bool:
"""
Only support repositories that:
- Have a flake.nix
- And have the `nix` command available.
"""
if shutil.which("nix") is None:
return False
flake_path = os.path.join(ctx.repo_dir, self.FLAKE_FILE)
return os.path.exists(flake_path)
def _ensure_old_profile_removed(self, ctx: "RepoContext") -> None:
"""
Best-effort removal of an existing profile entry.
This handles the "already provides the following file" conflict by
removing previous `package-manager` installations before we install
the new one.
Any error in `nix profile remove` is intentionally ignored, because
a missing profile entry is not a fatal condition.
"""
if shutil.which("nix") is None:
return
cmd = f"nix profile remove {self.PROFILE_NAME} || true"
try:
# NOTE: no allow_failure here → matches the existing unit tests
run_command(cmd, cwd=ctx.repo_dir, preview=ctx.preview)
except SystemExit:
# Unit tests explicitly assert this is swallowed
pass
def run(self, ctx: "InstallContext") -> None:
"""
Install Nix flake profile outputs (pkgmgr, default).
Any failure installing `pkgmgr` is treated as fatal (SystemExit).
A failure installing `default` is logged but does not abort.
"""
# Reuse supports() to keep logic in one place
if not self.supports(ctx): # type: ignore[arg-type]
return
print("Nix flake detected, attempting to install profile outputs...")
# Handle the "already installed" case up-front:
self._ensure_old_profile_removed(ctx) # type: ignore[arg-type]
for output in ("pkgmgr", "default"):
cmd = f"nix profile install {ctx.repo_dir}#{output}"
try:
# For 'default' we don't want the process to exit on error
allow_failure = output == "default"
run_command(cmd, cwd=ctx.repo_dir, preview=ctx.preview, allow_failure=allow_failure)
print(f"Nix flake output '{output}' successfully installed.")
except SystemExit as e:
print(f"[Error] Failed to install Nix flake output '{output}': {e}")
if output == "pkgmgr":
# Broken main CLI install → fatal
raise
# For 'default' we log and continue
print(
"[Warning] Continuing despite failure to install 'default' "
"because 'pkgmgr' is already installed."
)
break

View File

@@ -1,160 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installer for RPM-based packages defined in *.spec files.
This installer:
1. Installs build dependencies via dnf/yum builddep (where available)
2. Uses rpmbuild to build RPMs from the provided .spec file
3. Installs the resulting RPMs via `rpm -i`
It targets RPM-based systems (Fedora / RHEL / CentOS / Rocky / Alma, etc.).
"""
import glob
import os
import shutil
from typing import List, Optional
from pkgmgr.context import RepoContext
from pkgmgr.installers.base import BaseInstaller
from pkgmgr.run_command import run_command
class RpmSpecInstaller(BaseInstaller):
"""
Build and install RPM-based packages from *.spec files.
This installer is responsible for the full build + install of the
application on RPM-like systems.
"""
# Logical layer name, used by capability matchers.
layer = "os-packages"
def _is_rpm_like(self) -> bool:
"""
Basic RPM-like detection:
- rpmbuild must be available
- at least one of dnf / yum / yum-builddep must be present
"""
if shutil.which("rpmbuild") is None:
return False
has_dnf = shutil.which("dnf") is not None
has_yum = shutil.which("yum") is not None
has_yum_builddep = shutil.which("yum-builddep") is not None
return has_dnf or has_yum or has_yum_builddep
def _spec_path(self, ctx: RepoContext) -> Optional[str]:
"""Return the first *.spec file in the repository root, if any."""
pattern = os.path.join(ctx.repo_dir, "*.spec")
matches = sorted(glob.glob(pattern))
if not matches:
return None
return matches[0]
def supports(self, ctx: RepoContext) -> bool:
"""
This installer is supported if:
- we are on an RPM-based system (rpmbuild + dnf/yum/yum-builddep available), and
- a *.spec file exists in the repository root.
"""
if not self._is_rpm_like():
return False
return self._spec_path(ctx) is not None
def _find_built_rpms(self) -> List[str]:
"""
Find RPMs built by rpmbuild.
By default, rpmbuild outputs RPMs into:
~/rpmbuild/RPMS/*/*.rpm
"""
home = os.path.expanduser("~")
pattern = os.path.join(home, "rpmbuild", "RPMS", "**", "*.rpm")
return sorted(glob.glob(pattern, recursive=True))
def _install_build_dependencies(self, ctx: RepoContext, spec_path: str) -> None:
"""
Install build dependencies for the given .spec file.
Strategy (best-effort):
1. If dnf is available:
sudo dnf builddep -y <spec>
2. Else if yum-builddep is available:
sudo yum-builddep -y <spec>
3. Else if yum is available:
sudo yum-builddep -y <spec> # Some systems provide it via yum plugin
4. Otherwise: print a warning and skip automatic builddep install.
Any failure in builddep installation is treated as fatal (SystemExit),
consistent with other installer steps.
"""
spec_basename = os.path.basename(spec_path)
if shutil.which("dnf") is not None:
cmd = f"sudo dnf builddep -y {spec_basename}"
elif shutil.which("yum-builddep") is not None:
cmd = f"sudo yum-builddep -y {spec_basename}"
elif shutil.which("yum") is not None:
# Some distributions ship yum-builddep as a plugin.
cmd = f"sudo yum-builddep -y {spec_basename}"
else:
print(
"[Warning] No suitable RPM builddep tool (dnf/yum-builddep/yum) found. "
"Skipping automatic build dependency installation for RPM."
)
return
# Run builddep in the repository directory so relative spec paths work.
run_command(cmd, cwd=ctx.repo_dir, preview=ctx.preview)
def run(self, ctx: RepoContext) -> None:
"""
Build and install RPM-based packages.
Steps:
1. dnf/yum builddep <spec> (automatic build dependency installation)
2. rpmbuild -ba path/to/spec
3. sudo rpm -i ~/rpmbuild/RPMS/*/*.rpm
"""
spec_path = self._spec_path(ctx)
if not spec_path:
return
# 1) Install build dependencies
self._install_build_dependencies(ctx, spec_path)
# 2) Build RPMs
# Use the full spec path, but run in the repo directory.
spec_basename = os.path.basename(spec_path)
build_cmd = f"rpmbuild -ba {spec_basename}"
run_command(build_cmd, cwd=ctx.repo_dir, preview=ctx.preview)
# 3) Find built RPMs
rpms = self._find_built_rpms()
if not rpms:
print(
"[Warning] No RPM files found after rpmbuild. "
"Skipping RPM package installation."
)
return
# 4) Install RPMs
if shutil.which("rpm") is None:
print(
"[Warning] rpm binary not found on PATH. "
"Cannot install built RPMs."
)
return
install_cmd = "sudo rpm -i " + " ".join(rpms)
run_command(install_cmd, cwd=ctx.repo_dir, preview=ctx.preview)

View File

@@ -1,68 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installer for Python projects based on pyproject.toml.
Strategy:
- Determine a pip command in this order:
1. $PKGMGR_PIP (explicit override, e.g. ~/.venvs/pkgmgr/bin/pip)
2. sys.executable -m pip (current interpreter)
3. "pip" from PATH as last resort
- If pyproject.toml exists: pip install .
All installation failures are treated as fatal errors (SystemExit).
"""
import os
import sys
from pkgmgr.installers.base import BaseInstaller
from pkgmgr.run_command import run_command
class PythonInstaller(BaseInstaller):
"""Install Python projects and dependencies via pip."""
# Logical layer name, used by capability matchers.
layer = "python"
def supports(self, ctx) -> bool:
"""
Return True if this installer should handle the given repository.
Only pyproject.toml is supported as the single source of truth
for Python dependencies and packaging metadata.
"""
repo_dir = ctx.repo_dir
return os.path.exists(os.path.join(repo_dir, "pyproject.toml"))
def _pip_cmd(self) -> str:
"""
Resolve the pip command to use.
"""
explicit = os.environ.get("PKGMGR_PIP", "").strip()
if explicit:
return explicit
if sys.executable:
return f"{sys.executable} -m pip"
return "pip"
def run(self, ctx) -> None:
"""
Install Python project defined via pyproject.toml.
Any pip failure is propagated as SystemExit.
"""
pip_cmd = self._pip_cmd()
pyproject = os.path.join(ctx.repo_dir, "pyproject.toml")
if os.path.exists(pyproject):
print(
f"pyproject.toml found in {ctx.identifier}, "
f"installing Python project..."
)
cmd = f"{pip_cmd} install ."
run_command(cmd, cwd=ctx.repo_dir, preview=ctx.preview)

Some files were not shown because too many files have changed in this diff.