Compare commits

39 Commits

Author SHA1 Message Date
Kevin Veen-Birkenbach
804245325d Release version 1.2.1
Some checks failed
Mark stable commit / test-unit (push) Has been cancelled
Mark stable commit / test-integration (push) Has been cancelled
Mark stable commit / test-env-virtual (push) Has been cancelled
Mark stable commit / test-env-nix (push) Has been cancelled
Mark stable commit / test-e2e (push) Has been cancelled
Mark stable commit / test-virgin-user (push) Has been cancelled
Mark stable commit / test-virgin-root (push) Has been cancelled
Mark stable commit / mark-stable (push) Has been cancelled
2025-12-12 12:32:33 +01:00
Kevin Veen-Birkenbach
c05e77658a ci(docker): remove build-time nix check and rely on runtime env test
Why:
The Dockerfile previously validated `nix --version` during image build,
which is environment-sensitive and behaves differently in GitHub Actions
vs local/act builds due to PATH and non-login shell differences.

The actual contract is runtime availability of Nix, not build-step PATH
resolution. This is now reliably enforced by the dedicated `test-env-nix`
container test, which validates nix presence and flake execution in the
real execution environment.

This removes flaky CI behavior while keeping stronger, more accurate
coverage of the intended guarantee.

https://chatgpt.com/share/693bfbc7-63d8-800f-9ceb-728c7a58e963
2025-12-12 12:25:36 +01:00
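The runtime contract described above — Nix must resolve and run in the real execution environment, not merely during image build — can be sketched as a small check. Names are illustrative, not the repository's actual test code:

```python
import shutil
import subprocess

def nix_runtime_available(which=shutil.which, run=subprocess.run) -> bool:
    """Runtime check in the spirit of test-env-nix: `nix` must be on
    PATH and `nix --version` must exit successfully in the live env.
    The `which`/`run` parameters exist only so the sketch is testable."""
    nix = which("nix")
    if nix is None:
        return False
    result = run([nix, "--version"], capture_output=True, text=True)
    return result.returncode == 0
```

Because the probe runs inside the container at test time, it is immune to the PATH and non-login-shell differences that made the build-time check flaky.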
Kevin Veen-Birkenbach
324f6db1f3 ci: split container tests into virtualenv and Nix flake environments
Refactor CI to clearly separate virtualenv-based container tests from pure Nix flake tests across all distros (arch, debian, ubuntu, fedora, centos).
Introduce dedicated test-env-nix workflow and Makefile targets, rename former container tests to test-env-virtual, and update stable pipeline dependencies.
Improve Nix reliability in containers by fixing installer permissions and explicitly validating nix availability and version during image build and tests.
2025-12-12 12:15:40 +01:00
Kevin Veen-Birkenbach
2a69a83d71 Release version 1.2.0
2025-12-12 10:27:56 +01:00
Kevin Veen-Birkenbach
0ec4ccbe40 **fix(release): force-fetch remote tags and align tests**
* Treat remote tags as the source of truth by force-fetching tags from *origin*
* Update preview output to reflect the real fetch behavior
* Align unit tests with the new forced tag fetch command

https://chatgpt.com/share/693bdfc3-b8b4-800f-8adc-b1dc63c56a89
2025-12-12 10:26:22 +01:00
Kevin Veen-Birkenbach
0d864867cd **feat(release): adjust highest-tag detection tests and improve logging**
* Add debug output for latest vs current version tag in release git ops
* Treat “no version tags yet” as highest by definition
* Align unit tests with current *string-based* `tag >= latest` behavior
* Make tag listing mocks less brittle by matching command patterns
* Rename release init test to `test_init.py` for consistent discovery
2025-12-12 10:17:18 +01:00
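The two tag-comparison rules above — "no version tags yet" counts as highest, and the comparison is plain string ordering — can be illustrated with a minimal sketch (hypothetical function name; the real module's API may differ):

```python
from typing import Optional

def tag_is_highest(tag: str, latest: Optional[str]) -> bool:
    """Sketch of the described behavior: an absent latest tag means the
    new tag is highest by definition; otherwise a *string-based*
    comparison is used, not semantic-version ordering."""
    if latest is None:
        return True
    return tag >= latest
```

The string comparison is exactly why the tests had to be aligned: lexically, `"v1.10.0" < "v1.9.0"`, so the behavior diverges from semver once a component reaches two digits.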
Kevin Veen-Birkenbach
3ff0afe828 feat(release): refactor release workflow, tagging logic, and CLI integration
Refactor the release implementation into a dedicated workflow module with clear separation of concerns.

* Enforce a safe, deterministic Git flow: always sync with the remote before modifications, push only the current branch and the newly created version tag, and update the floating *latest* tag only when the released version is the highest.
* Add explicit user prompts for confirmation and optional branch deletion, with a forced mode to skip interaction.
* Update CLI wiring to pass all relevant flags.
* Add comprehensive unit tests for the new helpers and workflow entry points.
* Introduce detailed documentation describing the release process, safety rules, and execution flow.
2025-12-12 10:04:24 +01:00
Kevin Veen-Birkenbach
bd74ad41f9 Release version 1.1.0
2025-12-12 09:08:22 +01:00
Kevin Veen-Birkenbach
fa2a92481d Merge branch 'main' of github.com:kevinveenbirkenbach/package-manager 2025-12-12 09:08:19 +01:00
Kevin Veen-Birkenbach
6a1e001fc2 test(branch): remove obsolete test_branch.py after branch module refactor
The old test tests/unit/pkgmgr/actions/test_branch.py has been removed because:

- it targeted the previous monolithic pkgmgr.actions.branch module structure
- its patch targets no longer match the refactored code
- its responsibilities are now fully covered by the new, dedicated unit,
  integration, and E2E tests for branch actions and CLI wiring

This avoids redundant coverage and prevents misleading or broken tests
after the branch refactor.

https://chatgpt.com/share/693bcc8d-b84c-800f-8510-8d6c66faf627
2025-12-12 09:04:11 +01:00
Kevin Veen-Birkenbach
60afa92e09 Removed flake.lock 2025-12-12 00:30:17 +01:00
Kevin Veen-Birkenbach
212f3ce5eb Removed _requirements.txt 2025-12-12 00:27:46 +01:00
Kevin Veen-Birkenbach
0d79537033 Added Banner
2025-12-11 21:01:27 +01:00
Kevin Veen-Birkenbach
72fc69c2f8 Release version 1.0.0
2025-12-11 20:41:35 +01:00
Kevin Veen-Birkenbach
6d8c6deae8 **refactor(readme): rewrite README for multi-distro focus and Nix-based workflows**
Expanded and modernized the README to reflect PKGMGR's purpose as a
multi-distro development and packaging orchestrator. Added explanations for
Nix-based cross-distro workflows, clarified installation steps, documented the
full CLI capabilities, and embedded the architecture diagram.

Also replaced the verbose CLI DESCRIPTION_TEXT with a concise summary suitable
for `--help` output.

Included updated `assets/map.png`.

https://chatgpt.com/share/693b1d71-ca08-800f-a000-f3be49f7efb5
2025-12-11 20:37:05 +01:00
Kevin Veen-Birkenbach
6c116a029e Release version 0.10.2
2025-12-11 20:16:59 +01:00
Kevin Veen-Birkenbach
3eb7c81fa1 **Mark stable only on highest version tag**
Updated the `mark-stable` workflow so that the `stable` tag is only moved when:

* the current push is a version tag (`v*`)
* all tests have passed
* the pushed version tag is the highest semantic version among all existing tags

This ensures that `stable` always reflects the latest valid release and prevents older version tags from overwriting it.

https://chatgpt.com/share/693b163b-0c34-800f-adcb-12cf4744dbe2
2025-12-11 20:06:22 +01:00
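The gating rule — move `stable` only when the pushed tag is the highest semantic version among all tags — boils down to a version comparison like the following sketch (the workflow itself uses `sort -V`; these Python names are illustrative and assume simple `vMAJOR.MINOR.PATCH` tags):

```python
def parse_version(tag: str):
    # "v0.10.2" -> (0, 10, 2); assumes plain vMAJOR.MINOR.PATCH tags
    return tuple(int(part) for part in tag.lstrip("v").split("."))

def should_move_stable(pushed_tag: str, all_tags) -> bool:
    """Move `stable` only when the pushed version tag is the highest
    existing version, mirroring `sort -V | tail -n1` in the workflow."""
    if not all_tags:
        return False
    return parse_version(pushed_tag) == max(parse_version(t) for t in all_tags)
```

Note that tuple comparison correctly ranks `v0.10.2` above `v0.9.1`, which a lexical string sort would not.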
Kevin Veen-Birkenbach
0334f477fd Release version 0.10.2
2025-12-11 20:01:29 +01:00
Kevin Veen-Birkenbach
8bb99c99b7 refactor(init-nix): unify installer logic and add robust retry handling
Refactored the Nix initialization script to reduce duplicated code and
centralize the installation workflow. The core functionality remains
unchanged, but all installer calls now use a unified function with retry
support to ensure resilient downloads in CI and container environments.

Key improvements:
- Added download retry logic (5 minutes total, 20-second intervals)
- Consolidated installer invocation into `install_nix_with_retry`
- Reduced code duplication across container/host install paths
- Preserved existing installation behavior for all environments
- Maintained `nixbld` group and build-user handling
- Improved consistency and readability without altering semantics

This prevents intermittent failures such as:
“curl: (6) Could not resolve host: nixos.org”
and ensures stable, deterministic Nix setup in CI pipelines.

https://chatgpt.com/share/693b13ce-fdcc-800f-a7bc-81c67478edff
2025-12-11 19:56:10 +01:00
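The retry policy described above (5 minutes total, 20-second intervals) can be sketched as a generic loop; `install_with_retry` and its parameters are illustrative names, not the script's actual shell functions:

```python
import time

def install_with_retry(step, total_seconds=300, interval=20,
                       clock=time.monotonic, sleep=time.sleep):
    """Retry `step` (a callable returning True on success) until it
    succeeds or `total_seconds` elapse, pausing `interval` seconds
    between attempts -- the 5-minute / 20-second policy above.
    `clock`/`sleep` are injectable so the sketch can be tested."""
    deadline = clock() + total_seconds
    while True:
        if step():
            return True
        if clock() >= deadline:
            raise RuntimeError("installer kept failing until the deadline")
        sleep(interval)
```

The same shape covers both the Nix installer download here and the yay clone retry in a later commit: transient failures (DNS, HTTP 504) simply consume one interval instead of failing the build.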
Kevin Veen-Birkenbach
587cb2e516 Removed comments
2025-12-11 19:44:36 +01:00
Kevin Veen-Birkenbach
fcf9d4b59b **Aur builder: add retry logic for yay clone to recover from GitHub 504 errors**
Implemented a robust retry mechanism for cloning the yay AUR helper during Arch dependency installation.
The new logic retries the git clone operation for up to 5 minutes with a 20-second pause between attempts, allowing the build to proceed even when GitHub intermittently returns HTTP 504 errors.

This improves the stability of Arch container builds, especially under network pressure or transient upstream outages.
The yay build process now only starts once the clone step completes successfully.

https://chatgpt.com/share/693b102b-fdb0-800f-9f2e-d4840f14d329
2025-12-11 19:40:25 +01:00
Kevin Veen-Birkenbach
b483dbfaad **fix(init-nix): ensure nixbld group/users exist on Ubuntu root-without-systemd installs**
Implement `ensure_nix_build_group()` and use it in all code paths where Nix is installed as root.
This resolves Nix installation failures on Ubuntu containers (root, no systemd) where the installer aborts with:

```
error: the group 'nixbld' specified in 'build-users-group' does not exist
```

The fix standardizes creation of the `nixbld` group and `nixbld1..10` build users across:

* container root mode
* systemd host daemon installs
* root-on-host without systemd (Debian/Ubuntu CI case)

This makes Nix initialization deterministic across all test distros and fixes failing Ubuntu E2E runs.

https://chatgpt.com/share/693b0e1a-e5d4-800f-8a89-7d91108b0368
2025-12-11 19:31:25 +01:00
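The `ensure_nix_build_group()` idea — create the `nixbld` group plus `nixbld1..10` build users before the installer runs — can be sketched as follows. The flags and user count are assumptions for illustration; the real script's exact invocations may differ:

```python
import subprocess

def ensure_nix_build_group(n_users=10, run=subprocess.run):
    """Sketch: make sure the `nixbld` group and nixbld1..N system users
    exist so the Nix installer does not abort with
    "the group 'nixbld' specified in 'build-users-group' does not exist".
    `run` is injectable for testing; defaults to subprocess.run."""
    run(["groupadd", "--force", "nixbld"])  # --force: no error if it exists
    for i in range(1, n_users + 1):
        run(["useradd", "--system", "--no-create-home",
             "--gid", "nixbld", "--groups", "nixbld", f"nixbld{i}"])
```

Running this in every root install path (container root, systemd host, root without systemd) is what makes initialization deterministic across distros.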
Kevin Veen-Birkenbach
9630917570 **refactor(nix-flake): replace run_command wrapper with direct os.system execution and extend test coverage**
This commit removes the `run_command`-based execution model for Nix flake
installations and replaces it with a direct `os.system` invocation.
This ensures that *all* Nix diagnostics (stdout/stderr) are fully visible and
no longer suppressed by wrapper logic.

Key changes:

* Directly run `nix profile install` via `os.system` for full error output
* Correctly decode real exit codes via `os.WIFEXITED` / `os.WEXITSTATUS`
* Preserve mandatory/optional behavior for flake outputs
* Update unit tests to the new execution model using `unittest`
* Add complete coverage for:

  * successful installs
  * mandatory failures → raise SystemExit(code)
  * optional failures → warn and continue
  * environment-based disabling via `PKGMGR_DISABLE_NIX_FLAKE_INSTALLER`
* Remove obsolete mocks and legacy test logic that assumed `run_command`

Overall, this improves transparency, debuggability, and correctness of the
Nix flake installer while maintaining full backward compatibility at the
interface level.

https://chatgpt.com/share/693b0a20-99f4-800f-b789-b00a50413612
2025-12-11 19:14:25 +01:00
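The exit-code decoding mentioned above matters because `os.system` on POSIX returns a wait-style status, not a plain return code. A minimal sketch of the decoding step (the surrounding installer logic is omitted; the nonzero fallback for signal deaths is an assumption):

```python
import os

def decode_exit_status(status: int) -> int:
    """Decode the wait-style status from os.system on POSIX:
    WEXITSTATUS for normal exits, a generic nonzero code otherwise
    (e.g. the child was terminated by a signal)."""
    if os.WIFEXITED(status):
        return os.WEXITSTATUS(status)
    return 1
```

Without this, a raw status of 256 (child exited with code 1) would be misread as a large, meaningless error code when passed to `SystemExit`.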
Kevin Veen-Birkenbach
6a4432dd04 Added required sudo to debian
2025-12-11 18:42:33 +01:00
Kevin Veen-Birkenbach
cfb91d825a Release version 0.10.1
2025-12-11 18:38:15 +01:00
Kevin Veen-Birkenbach
a3b21f23fc pkgmgr-wrapper: improve Nix detection and auto-initialization
- Extend PATH probing to include /home/nix/.nix-profile/bin/nix (container mode).
- Automatically invoke init-nix.sh when nix is not found before first run.
- Ensure pkgmgr always attempts a one-time Nix initialization instead of failing prematurely.
- Improve error message to clarify that nix was still missing *after* initialization attempt.
- Keep existing flake-based execution path unchanged (exec nix run …).

This makes the wrapper fully reliable across Debian/Ubuntu package installs,
fresh containers, and minimal systems where Nix is not yet initialized.

https://chatgpt.com/share/693b005d-b250-800f-8830-ab71685f51b3
2025-12-11 18:33:02 +01:00
Kevin Veen-Birkenbach
e49dd85200 Release version 0.10.0
2025-12-11 18:17:21 +01:00
Kevin Veen-Birkenbach
c9dec5ecd6 Merge branch 'feature/mirror'
2025-12-11 17:50:53 +01:00
Kevin Veen-Birkenbach
f3c5460e48 feat(mirror): support SSH MIRRORS, multi-push origin and remote probe
- Switch MIRRORS to SSH-based URLs including custom ports/domains
  (GitHub, git.veen.world, code.cymais.cloud)
- Extend mirror IO:
  - load_config_mirrors filters empty values
  - read_mirrors_file now supports:
    * "name url" lines
    * "url" lines with auto-generated names from URL host (host[:port])
  - write_mirrors_file prints full preview content
- Enhance git_remote:
  - determine_primary_remote_url used for origin bootstrap
  - ensure_origin_remote keeps existing origin URL and
    adds all mirror URLs as additional push URLs
  - add is_remote_reachable() helper based on `git ls-remote --exit-code`
- Implement non-destructive remote mirror checks in setup_cmd:
  - `_probe_mirror()` wraps `git ls-remote` and returns (ok, message)
  - `pkgmgr mirror setup --remote` now probes each mirror URL and
    prints [OK]/[WARN] with details instead of placeholder text
- Add unit tests for mirror actions:
  - test_git_remote: default SSH URL building and primary URL selection
  - test_io: config + MIRRORS parsing including auto-named URL-only entries
  - test_setup_cmd: probe_mirror success/failure handling

https://chatgpt.com/share/693adee0-aa3c-800f-b72a-98473fdaf760
2025-12-11 17:49:31 +01:00
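The non-destructive probe described above can be sketched like this; `probe_mirror` mirrors the `_probe_mirror()` idea (wrap `git ls-remote --exit-code`, return `(ok, message)`), but the exact message format is illustrative:

```python
import subprocess

def probe_mirror(url, run=subprocess.run):
    """Read-only reachability check for a mirror URL using
    `git ls-remote --exit-code`; returns (ok, message) so the caller
    can print [OK]/[WARN] lines. `run` is injectable for testing."""
    proc = run(["git", "ls-remote", "--exit-code", url],
               capture_output=True, text=True)
    if proc.returncode == 0:
        return True, f"[OK] {url}"
    return False, f"[WARN] {url} (git ls-remote exited {proc.returncode})"
```

Because `ls-remote` never writes to the remote, probing every mirror is safe to run on each `pkgmgr mirror setup --remote` invocation.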
Kevin Veen-Birkenbach
39b16b87a8 CI: Add debugging instrumentation to identify container build/run anomalies
- Added `git rev-parse HEAD` to test-container workflow to confirm the exact
  commit SHA used during CI runs.
- Updated Dockerfile to print BASE_IMAGE and OS release information during
  build for better reproducibility diagnostics.
- Extended test-container script to dump the first 40 lines of
  `docker image inspect` output, allowing verification of the image ID,
  creation time, and applied build args.

These additions help trace discrepancies between local builds and GitHub
Actions, ensuring we can detect mismatches in commit SHA, base image,
or container metadata.

https://chatgpt.com/share/693ae07a-8c58-800f-88e6-254cdb00b676
2025-12-11 17:27:57 +01:00
Kevin Veen-Birkenbach
26c9d79814 Added mirrors 2025-12-11 16:47:23 +01:00
Kevin Veen-Birkenbach
2776d18a42 Implemented arch support 2025-12-11 16:31:00 +01:00
Kevin Veen-Birkenbach
7057ccfb95 CI: Always rebuild test images with --no-cache before container and E2E tests
This ensures that GitHub Actions never reuses outdated Docker layers and that
each test run starts from a fully clean environment. The workflows for
test-container and test-e2e now explicitly invoke:

    distro="${{ matrix.distro }}" make build-no-cache

before executing the actual tests.
This aligns the CI behaviour with local testing, eliminates hidden caching
differences, and guarantees deterministic test results across all distros.

https://chatgpt.com/share/693ae07a-8c58-800f-88e6-254cdb00b676
2025-12-11 16:17:10 +01:00
Kevin Veen-Birkenbach
1807949c6f Add mirror management commands and refactor CLI parser into modules
- Implement new mirror actions:
  - list_mirrors: show mirrors from config, MIRRORS file, or merged view
  - diff_mirrors: compare config mirrors with MIRRORS file (ONLY IN CONFIG,
    ONLY IN FILE, URL MISMATCH, OK)
  - merge_mirrors: merge mirrors between config and MIRRORS file in both
    directions, with preview mode and user config writing via save_user_config
  - setup_mirrors: prepare local Git remotes (ensure origin) and print
    provider-URL suggestions for remote repositories
- Introduce mirror utilities:
  - RepoMirrorContext with resolved_mirrors (config + file, file wins)
  - load_config_mirrors supporting dict and list-of-dicts shapes
  - read/write MIRRORS file with simple "name url" format and preview mode
  - helper for building default SSH URLs from provider/account/repository
- Wire mirror commands into CLI:
  - Add handle_mirror_command and integrate "mirror" into dispatch
  - Add dedicated CLI parser modules under pkgmgr.cli.parser:
    * common, install_update, config_cmd, navigation_cmd,
      branch_cmd, release_cmd, version_cmd, changelog_cmd,
      list_cmd, make_cmd, mirror_cmd
  - Replace old flat cli/parser.py with modular parser package and
    SortedSubParsersAction in common.py
- Update TODO.md to mark MIRROR as implemented
- Add E2E tests for mirror commands:
  - test_mirror_help
  - test_mirror_list_preview_all
  - test_mirror_diff_preview_all
  - test_mirror_merge_config_to_file_preview_all
  - test_mirror_setup_preview_all

https://chatgpt.com/share/693adee0-aa3c-800f-b72a-98473fdaf760
2025-12-11 16:10:19 +01:00
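The MIRRORS file format above ("name url" lines, or bare URLs with names auto-generated from the host, `host[:port]`) can be sketched as a small parser. This is a simplified illustration that handles `ssh://` URLs; scp-style `git@host:path` URLs would need extra handling:

```python
from urllib.parse import urlparse

def parse_mirror_line(line: str):
    """Parse one MIRRORS line: either "name url" or a bare URL whose
    name is derived from the URL host (host, or host:port)."""
    parts = line.split()
    if len(parts) == 2:
        return parts[0], parts[1]
    url = parts[0]
    parsed = urlparse(url)
    name = parsed.hostname or url
    if parsed.port:
        name = f"{name}:{parsed.port}"
    return name, url
```

Auto-naming keeps the file terse for one-off mirrors while still producing stable identifiers for the diff/merge commands.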
Kevin Veen-Birkenbach
d611720b8f Solved bug when volumes don't exist 2025-12-11 15:46:45 +01:00
Kevin Veen-Birkenbach
bf871650a8 Added purge option to makefile 2025-12-11 15:29:51 +01:00
Kevin Veen-Birkenbach
5ca1adda7b Refactor CI distro handling and container build scripts
- Introduce a GitHub Actions matrix for `test-container` and `test-e2e`
  to run against arch, debian, ubuntu, fedora, and centos
- Run unit and integration tests only in the Arch container by passing
  `distro="arch"` via make in the corresponding workflows
- Replace the global DISTROS loop with a single `distro` variable in
  the Makefile, defaulting to `arch`, and export it for all scripts
- Update build scripts (build-image, build-image-no-cache, build-image-missing)
  to build images for the selected distro only
- Simplify test-container script to validate a single distro image using
  the `distro` environment variable
- Simplify E2E, unit, and integration test scripts to run against a
  single distro container instead of iterating over all distros

https://chatgpt.com/share/693acbba-9e30-800f-94fb-fea4489e9078
2025-12-11 14:48:36 +01:00
Kevin Veen-Birkenbach
acb18adf76 test: restore Dockerfile ENTRYPOINT for all test runs (fix Nix TLS on CentOS)
All test scripts (unit, integration, e2e) previously overwrote the Docker
ENTRYPOINT by using `--entrypoint bash`, which bypassed the container’s
startup logic in `docker-entry.sh`.

`docker-entry.sh` performs essential initialization steps such as:

- CA bundle auto-detection (NIX_SSL_CERT_FILE, SSL_CERT_FILE, etc.)
- Nix environment setup
- PATH adjustments and distro logging

By removing the explicit `--entrypoint bash` and invoking:

  bash -lc '...'

directly as the container command, the Dockerfile’s ENTRYPOINT is restored
and runs as intended before executing the test logic.

This fixes TLS issues in CentOS E2E runs where Nix was unable to fetch
flake inputs due to missing CA configuration.

https://chatgpt.com/share/693ac1f3-fb7c-800f-9e5c-b40c351a9f04
2025-12-11 14:06:39 +01:00
Kevin Veen-Birkenbach
c18490f5d3 deb: remove hard dependency on distro-provided Nix
The Debian Nix package causes flake builds to fail inside the test and
container environment due to sandboxing and patched Nix behavior.

To ensure consistent behaviour across all distributions and align
container logic with production logic, pkgmgr now relies on its own
`init-nix.sh` bootstrap script instead of the distro’s `nix` package.

Dropping `Depends: nix` guarantees that both Debian containers and real
Debian systems install and initialize Nix via the upstream installer,
matching the behaviour on Arch, Fedora, and Ubuntu.

https://chatgpt.com/share/693ab9bf-e6ac-800f-83ba-a4abd1bfe407
2025-12-11 13:31:56 +01:00
99 changed files with 4943 additions and 2158 deletions


```diff
@@ -13,8 +13,11 @@ jobs:
   test-integration:
     uses: ./.github/workflows/test-integration.yml
-  test-container:
-    uses: ./.github/workflows/test-container.yml
+  test-env-virtual:
+    uses: ./.github/workflows/test-env-virtual.yml
+  test-env-nix:
+    uses: ./.github/workflows/test-env-nix.yml
   test-e2e:
     uses: ./.github/workflows/test-e2e.yml
```


```diff
@@ -3,7 +3,9 @@ name: Mark stable commit
 on:
   push:
     branches:
-      - main
+      - main       # still run tests for main
+    tags:
+      - 'v*'       # run tests for version tags (e.g. v0.9.1)
 jobs:
   test-unit:
@@ -12,8 +14,11 @@ jobs:
   test-integration:
     uses: ./.github/workflows/test-integration.yml
-  test-container:
-    uses: ./.github/workflows/test-container.yml
+  test-env-virtual:
+    uses: ./.github/workflows/test-env-virtual.yml
+  test-env-nix:
+    uses: ./.github/workflows/test-env-nix.yml
   test-e2e:
     uses: ./.github/workflows/test-e2e.yml
@@ -28,37 +33,70 @@ jobs:
     needs:
       - test-unit
       - test-integration
-      - test-container
+      - test-env-nix
+      - test-env-virtual
       - test-e2e
       - test-virgin-user
       - test-virgin-root
     runs-on: ubuntu-latest
+    # Only run this job if the push is for a version tag (v*)
+    if: startsWith(github.ref, 'refs/tags/v')
     permissions:
-      contents: write # to move the tag
+      contents: write # Required to move/update the tag
     steps:
       - name: Checkout repository
         uses: actions/checkout@v4
         with:
           fetch-depth: 0
+          fetch-tags: true # We need all tags for version comparison
-      - name: Move 'stable' tag to this commit
+      - name: Move 'stable' tag only if this version is the highest
         run: |
           set -euo pipefail
           git config user.name "github-actions[bot]"
           git config user.email "github-actions[bot]@users.noreply.github.com"
-          echo "Tagging commit $GITHUB_SHA as stable…"
+          echo "Ref: $GITHUB_REF"
+          echo "SHA: $GITHUB_SHA"
-          # delete local tag if exists
+          VERSION="${GITHUB_REF#refs/tags/}"
+          echo "Current version tag: ${VERSION}"
+          echo "Collecting all version tags..."
+          ALL_V_TAGS="$(git tag --list 'v*' || true)"
+          if [[ -z "${ALL_V_TAGS}" ]]; then
+            echo "No version tags found. Skipping stable update."
+            exit 0
+          fi
+          echo "All version tags:"
+          echo "${ALL_V_TAGS}"
+          # Determine highest version using natural version sorting
+          LATEST_TAG="$(printf '%s\n' ${ALL_V_TAGS} | sort -V | tail -n1)"
+          echo "Highest version tag: ${LATEST_TAG}"
+          if [[ "${VERSION}" != "${LATEST_TAG}" ]]; then
+            echo "Current version ${VERSION} is NOT the highest version."
+            echo "Stable tag will NOT be updated."
+            exit 0
+          fi
+          echo "Current version ${VERSION} IS the highest version."
+          echo "Updating 'stable' tag..."
+          # Delete existing stable tag (local + remote)
           git tag -d stable 2>/dev/null || true
-          # delete remote tag if exists
           git push origin :refs/tags/stable || true
-          # create new tag on this commit
+          # Create new stable tag
           git tag stable "$GITHUB_SHA"
           git push origin stable
-          echo "✅ Stable tag updated."
+          echo "✅ Stable tag updated to ${VERSION}."
```


```diff
@@ -1,19 +0,0 @@
-name: Test OS Containers
-on:
-  workflow_call:
-jobs:
-  test-container:
-    runs-on: ubuntu-latest
-    timeout-minutes: 30
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@v4
-      - name: Show Docker version
-        run: docker version
-      - name: Run container tests
-        run: make test-container
```

@@ -6,7 +6,11 @@ on:
jobs:
test-e2e:
runs-on: ubuntu-latest
timeout-minutes: 60 # E2E + all distros can be heavier
timeout-minutes: 60 # E2E can be heavier
strategy:
fail-fast: false
matrix:
distro: [arch, debian, ubuntu, fedora, centos]
steps:
- name: Checkout repository
@@ -15,5 +19,7 @@ jobs:
- name: Show Docker version
run: docker version
- name: Run E2E tests via make (all distros)
run: make test-e2e
- name: Run E2E tests via make (${{ matrix.distro }})
run: |
set -euo pipefail
distro="${{ matrix.distro }}" make test-e2e

.github/workflows/test-env-nix.yml (new file)
@@ -0,0 +1,26 @@
name: Test Virgin Nix (flake only)
on:
workflow_call:
jobs:
test-env-nix:
runs-on: ubuntu-latest
timeout-minutes: 45
strategy:
fail-fast: false
matrix:
distro: [arch, debian, ubuntu, fedora, centos]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Show Docker version
run: docker version
- name: Nix flake-only test (${{ matrix.distro }})
run: |
set -euo pipefail
distro="${{ matrix.distro }}" make test-env-nix

.github/workflows/test-env-virtual.yml (new file)
@@ -0,0 +1,28 @@
name: Test OS Containers
on:
workflow_call:
jobs:
test-env-virtual:
runs-on: ubuntu-latest
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
distro: [arch, debian, ubuntu, fedora, centos]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Show commit SHA
run: git rev-parse HEAD
- name: Show Docker version
run: docker version
- name: Run container tests (${{ matrix.distro }})
run: |
set -euo pipefail
distro="${{ matrix.distro }}" make test-env-virtual

@@ -16,4 +16,4 @@ jobs:
run: docker version
- name: Run integration tests via make (Arch container)
run: make test-integration DISTROS="arch"
run: make test-integration distro="arch"

@@ -16,4 +16,4 @@ jobs:
run: docker version
- name: Run unit tests via make (Arch container)
run: make test-unit DISTROS="arch"
run: make test-unit distro="arch"

.gitignore
@@ -27,8 +27,9 @@ Thumbs.db
# Nix Cache to speed up tests
.nix/
.nix-dev-installed
flake.lock
# Ignore logs
*.log
result
result

@@ -1,3 +1,130 @@
## [1.2.1] - 2025-12-12
* **Changed**
* Split container tests into *virtualenv* and *Nix flake* environments to clearly separate Python and Nix responsibilities.
* **Fixed**
* Fixed Nix installer permission issues when running under a different user in containers.
* Improved reliability of post-install Nix initialization across all distro packages.
* **CI**
* Replaced generic container tests with explicit environment checks.
* Validate Nix availability via *nix flake* tests instead of Docker build-time side effects.
## [1.2.0] - 2025-12-12
* **Release workflow overhaul**
* Introduced a fully structured release workflow with clear phases and safeguards
* Added preview-first releases with explicit confirmation before execution
* Automatic handling of *latest* tag when a release is the newest version
* Optional branch closing after successful releases with interactive confirmation
* Improved safety by syncing with remote before any changes
* Clear separation of concerns (workflow, git handling, prompts, versioning)
## [1.1.0] - 2025-12-12
* Added *branch drop* for destructive branch deletion and introduced *--force/-f* flags for branch close and branch drop to skip confirmation prompts.
## [1.0.0] - 2025-12-11
* **1.0.0 Official Stable Release 🎉**
*First stable release of PKGMGR, the multi-distro development and package workflow manager.*
---
**Key Features**
**Core Functionality**
* Manage many repositories with one CLI: `clone`, `update`, `install`, `list`, `path`, `config`
* Proxy wrappers for Git, Docker/Compose and Make
* Multi-repo execution with safe *preview mode*
* Mirror management: `mirror list/diff/merge/setup`
**Releases & Versioning**
* Automated SemVer bumps, tagging and changelog generation
* Supports PKGBUILD, Debian, RPM, pyproject.toml, flake.nix
**Developer Tools**
* Open repositories in VS Code, file manager or terminal
* Unified workflows across all major Linux distros
**Nix Integration**
* Cross-distro reproducible builds via Nix flakes
* CI-tested across all supported environments
---
**Summary**
PKGMGR 1.0.0 unifies repository management, build tooling, release automation and reproducible multi-distro workflows into one cohesive CLI tool.
*This is the first official stable release.*
## [0.10.2] - 2025-12-11
* Stable tag now updates only when a new highest version is released.
* Debian package now includes sudo to ensure privilege escalation works reliably.
* Nix setup is significantly more resilient with retries, correct permissions, and better environment handling.
* AUR builder setup uses retries so yay installs succeed even under network instability.
* Nix flake installation now fails only on mandatory parts; optional outputs no longer block installation.
## [0.10.1] - 2025-12-11
* Fixed Debian/Ubuntu to pass container E2E tests
## [0.10.0] - 2025-12-11
**Mirror System**
* Added SSH mirror support including multi-push and remote probing
* Introduced mirror management commands and refactored the CLI parser into modules
**CI/CD**
* Migrated to reusable workflows with improved debugging instrumentation
* Made stable-tag automation reliable for workflow_run events and permissions
* Ensured deterministic test results by rebuilding all test containers with no-cache
**E2E and Container Tests**
* Fixed Git safe.directory handling across all containers
* Restored Dockerfile ENTRYPOINT to resolve Nix TLS issues
* Fixed missing volume errors and hardened the E2E runner
* Added full Nix flake E2E test matrix across all distro containers
* Disabled Nix sandboxing for cross-distro builds where required
**Nix and Python Environment**
* Unified Nix Python environment and introduced lazy CLI imports
* Ensured PyYAML availability and improved Python 3.13 compatibility
* Refactored flake.nix to remove side effects and rely on generic python3
**Packaging**
* Removed Debian's hard dependency on Nix
* Restructured packaging layout and refined build paths
* Excluded assets from Arch PKGBUILD rsync
* Cleaned up obsolete ignore files
**Repository Layout**
* Restructured repository to align local, Nix-based, and distro-based build workflows
* Added Arch support and refined build/purge scripts
## [0.9.1] - 2025-12-10
* Refactored installer: new `venv-create.sh`, cleaner root/user setup flow, updated README with architecture map.

@@ -1,9 +1,12 @@
# ------------------------------------------------------------
# Base image selector — overridden by Makefile
# ------------------------------------------------------------
ARG BASE_IMAGE=archlinux:latest
ARG BASE_IMAGE
FROM ${BASE_IMAGE}
RUN echo "BASE_IMAGE=${BASE_IMAGE}" && \
cat /etc/os-release || true
# ------------------------------------------------------------
# Nix environment defaults
#

MIRRORS (new file)
@@ -0,0 +1,3 @@
git@github.com:kevinveenbirkenbach/package-manager.git
ssh://git@git.veen.world:2201/kevinveenbirkenbach/pkgmgr.git
ssh://git@code.cymais.cloud:2201/kevinveenbirkenbach/pkgmgr.git

@@ -1,12 +1,17 @@
.PHONY: install setup uninstall \
test build build-no-cache test-unit test-e2e test-integration \
test-container
test build build-no-cache build-no-cache-all build-missing delete-volumes purge \
test-unit test-e2e test-integration test-env-virtual test-env-nix
# Distro
# Options: arch debian ubuntu fedora centos
DISTROS ?= arch debian ubuntu fedora centos
distro ?= arch
export distro
# ------------------------------------------------------------
# Distro list and base images
# Base images
# (kept for documentation/reference; actual build logic is in scripts/build)
# ------------------------------------------------------------
DISTROS := arch debian ubuntu fedora centos
BASE_IMAGE_ARCH := archlinux:latest
BASE_IMAGE_DEBIAN := debian:stable-slim
BASE_IMAGE_UBUNTU := ubuntu:latest
@@ -14,7 +19,6 @@ BASE_IMAGE_FEDORA := fedora:latest
BASE_IMAGE_CENTOS := quay.io/centos/centos:stream9
# Make them available in scripts
export DISTROS
export BASE_IMAGE_ARCH
export BASE_IMAGE_DEBIAN
export BASE_IMAGE_UBUNTU
@@ -34,11 +38,18 @@ setup:
# ------------------------------------------------------------
# Docker build targets (delegated to scripts/build)
# ------------------------------------------------------------
build:
@bash scripts/build/build-image.sh
build-no-cache:
@bash scripts/build/build-image-no-cache.sh
build:
@bash scripts/build/build-image.sh
build-no-cache-all:
@set -e; \
for d in $(DISTROS); do \
echo "=== build-no-cache: $$d ==="; \
distro="$$d" $(MAKE) build-no-cache; \
done
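The per-distro fan-out in `build-no-cache-all` can be sketched in plain shell; in this illustration the recursive `make` invocation is replaced by `echo` so the loop logic is visible on its own:

```bash
# Sketch of the Makefile loop; the real target runs: distro="$d" $(MAKE) build-no-cache
build_all() {
  for d in $DISTROS; do
    echo "=== build-no-cache: $d ==="
  done
}

DISTROS="arch debian ubuntu fedora centos"
build_all
```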
# ------------------------------------------------------------
# Test targets (delegated to scripts/test)
@@ -53,8 +64,12 @@ test-integration: build-missing
test-e2e: build-missing
@bash scripts/test/test-e2e.sh
test-container: build-missing
@bash scripts/test/test-container.sh
test-env-virtual: build-missing
@bash scripts/test/test-env-virtual.sh
test-env-nix: build-missing
@bash scripts/test/test-env-nix.sh
# ------------------------------------------------------------
# Build only missing container images
@@ -63,7 +78,12 @@ build-missing:
@bash scripts/build/build-image-missing.sh
# Combined test target for local + CI (unit + integration + e2e)
test: test-container test-unit test-integration test-e2e
test: test-env-virtual test-unit test-integration test-e2e
delete-volumes:
@docker volume rm pkgmgr_nix_store_${distro} pkgmgr_nix_cache_${distro} || true
purge: delete-volumes build-no-cache
# ------------------------------------------------------------
# System install (native packages, calls scripts/installation/run-package.sh)

README.md
@@ -1,70 +1,188 @@
# Package Manager🤖📦
# Package Manager 🤖📦
![PKGMGR Banner](assets/banner.jpg)
[![GitHub Sponsors](https://img.shields.io/badge/Sponsor-GitHub%20Sponsors-blue?logo=github)](https://github.com/sponsors/kevinveenbirkenbach)
[![Patreon](https://img.shields.io/badge/Support-Patreon-orange?logo=patreon)](https://www.patreon.com/c/kevinveenbirkenbach)
[![Buy Me a Coffee](https://img.shields.io/badge/Buy%20me%20a%20Coffee-Funding-yellow?logo=buymeacoffee)](https://buymeacoffee.com/kevinveenbirkenbach) [![PayPal](https://img.shields.io/badge/Donate-PayPal-blue?logo=paypal)](https://s.veen.world/paypaldonate)
[![Patreon](https://img.shields.io/badge/Support-Patreon-orange?logo=patreon)](https://www.patreon.com/c/kevinveenbirkenbach)
[![Buy Me a Coffee](https://img.shields.io/badge/Buy%20me%20a%20Coffee-Funding-yellow?logo=buymeacoffee)](https://buymeacoffee.com/kevinveenbirkenbach)
[![PayPal](https://img.shields.io/badge/Donate-PayPal-blue?logo=paypal)](https://s.veen.world/paypaldonate)
[![GitHub license](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
[![GitHub repo size](https://img.shields.io/github/repo-size/kevinveenbirkenbach/package-manager)](https://github.com/kevinveenbirkenbach/package-manager)
*Kevins's* Package Manager is a configurable Python tool designed to manage multiple repositories via Bash. It automates common Git operations such as clone, pull, push, status, and more. Additionally, it handles the creation of executable wrappers and alias links for your repositories.
**Kevin's Package Manager (PKGMGR)** is a *multi-distro* package manager and workflow orchestrator.
It helps you **develop, package, release and manage projects across multiple Linux-based
operating systems** (Arch, Debian, Ubuntu, Fedora, CentOS, …).
PKGMGR is implemented in **Python** and uses **Nix (flakes)** as a foundation for
distribution-independent builds and tooling. On top of that it provides a rich
CLI that proxies common developer tools (Git, Docker, Make, …) and glues them
together into repeatable development workflows.
---
## Why PKGMGR? 🧠
Traditional distro package managers like `apt`, `pacman` or `dnf` focus on a
single operating system. PKGMGR instead focuses on **your repositories and
development lifecycle**:
* one configuration for all your repos,
* one CLI to interact with them,
* one Nix-based layer to keep tooling reproducible across distros.
You keep using your native package manager where it makes sense; PKGMGR
coordinates the *development and release flow* around it.
---
## Features 🚀
- **Installation & Setup:**
Create executable wrappers with auto-detected commands (e.g. `main.sh` or `main.py`).
- **Git Operations:**
Easily perform `git pull`, `push`, `status`, `commit`, `diff`, `add`, `show`, and `checkout` with extra parameters passed through.
- **Configuration Management:**
Manage repository configurations via a default file (`config/defaults.yaml`) and a user-specific file (`config/config.yaml`). Initialize, add, delete, or ignore entries using subcommands.
- **Path & Listing:**
Display repository paths or list all configured packages with their details.
- **Custom Aliases:**
Generate and manage custom aliases for easy command invocation.
### Multi-distro development & packaging
* Manage **many repositories at once** from a single `config/config.yaml`.
* Drive full **release pipelines** across Linux distributions using:
* Nix flakes (`flake.nix`)
* PyPI style builds (`pyproject.toml`)
* OS packages (PKGBUILD, Debian control/changelog, RPM spec)
* Ansible Galaxy metadata and more.
### Rich CLI for daily work
All commands are exposed via the `pkgmgr` CLI and are available on every distro:
* **Repository management**
* `clone`, `update`, `install`, `delete`, `deinstall`, `path`, `list`, `config`
* **Git proxies**
* `pull`, `push`, `status`, `diff`, `add`, `show`, `checkout`,
`reset`, `revert`, `rebase`, `commit`, `branch`
* **Docker & Compose orchestration**
* `build`, `up`, `down`, `exec`, `ps`, `start`, `stop`, `restart`
* **Release toolchain**
* `version`, `release`, `changelog`, `make`
* **Mirror & workflow helpers**
* `mirror` (list/diff/merge/setup), `shell`, `terminal`, `code`, `explore`
Many of these commands support `--preview` mode so you can inspect the
underlying Git or Docker calls without executing them.
### Full development workflows
PKGMGR is not just a helper around Git commands. Combined with its release and
versioning features it can drive **end-to-end workflows**:
1. Clone and mirror repositories.
2. Run tests and builds through `make` or Nix.
3. Bump versions, update changelogs and tags.
4. Build distro-specific packages.
5. Keep all mirrors and working copies in sync.
The extensive E2E tests (`tests/e2e/`) and GitHub Actions workflows (including
“virgin user” and “virgin root” Arch tests) validate these flows across
different Linux environments.
---
## Architecture & Setup Map 🗺️
The following diagram provides a full overview of PKGMGRs package structure,
installation layers, and setup controller flow:
The following diagram gives a full overview of:
* PKGMGR's package structure,
* the layered installers (OS, foundation, Python, Makefile),
* and the setup controller that decides which layer to use on a given system.
![PKGMGR Architecture](assets/map.png)
**Diagram status:** *as of 11 December 2025*
**Always-up-to-date version:** https://s.veen.world/pkgmgrmp
**Diagram status:** 12 December 2025
**Always-up-to-date version:** [https://s.veen.world/pkgmgrmp](https://s.veen.world/pkgmgrmp)
---
## Installation ⚙️
Clone the repository and ensure your `~/.local/bin` is in your system PATH:
### 1. Get the latest stable version
For a stable setup, use the **latest tagged release** (the tag pointed to by
`latest`):
```bash
git clone https://github.com/kevinveenbirkenbach/package-manager.git
cd package-manager
# Optional but recommended: checkout the latest stable tag
git fetch --tags
git checkout "$(git describe --tags --abbrev=0)"
```
Install make and pip if not installed yet:
### 2. Install via Make
The project ships with a Makefile that encapsulates the typical installation
flow. On most systems you only need:
```bash
pacman -S make python-pip
# Ensure make, Python and pip are installed via your distro package manager
# (e.g. pacman -S make python python-pip, apt install make python3-pip, ...)
make install
```
Then, run the following command to set up the project:
This will:
* create or reuse a Python virtual environment,
* install PKGMGR (and its Python dependencies) into that environment,
* expose the `pkgmgr` executable on your PATH (usually via `~/.local/bin`),
* prepare Nix-based integration where available so PKGMGR can build and manage
packages distribution-independently.
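Whether the `pkgmgr` wrapper is actually reachable afterwards depends on `~/.local/bin` being on `PATH`. A small sketch of that check; the `example_path` value here is illustrative, not taken from any real environment:

```bash
# Returns success when $2 (a PATH-like string) contains the directory $1
path_contains() {
  case ":$2:" in
    *":$1:"*) return 0 ;;
    *)        return 1 ;;
  esac
}

example_path="/usr/bin:$HOME/.local/bin"
if path_contains "$HOME/.local/bin" "$example_path"; then
  echo "on PATH"
fi
```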
For development use, you can also run:
```bash
make setup
```
The `make setup` command will:
- Make `main.py` executable.
- Install required packages from `requirements.txt`.
- Execute `python main.py install` to complete the installation.
which prepares the environment and leaves you with a fully wired development
workspace (including Nix, tests and scripts).
---
## Usage 🧰
After installation, the main entry point is:
```bash
pkgmgr --help
```
This prints a list of all available subcommands, for example:
* `pkgmgr list --all`: show all repositories in the config
* `pkgmgr update --all --clone-mode https`: update every repository
* `pkgmgr release patch --preview`: simulate a patch release
* `pkgmgr version --all`: show version information for all repositories
* `pkgmgr mirror setup --preview --all`: prepare Git mirrors (no changes in preview)
* `pkgmgr make install --preview pkgmgr`: preview make install for the pkgmgr repo
The help for each command is available via:
```bash
pkgmgr <command> --help
```
---
## License 📄
This project is licensed under the MIT License.
See the [LICENSE](LICENSE) file for details.
---
## Author 👤
Kevin Veen-Birkenbach
Kevin Veen-Birkenbach
[https://www.veen.world](https://www.veen.world)

@@ -3,5 +3,4 @@
For the following checkout the implementation map:
- Implement TAGS
- Implement MIRROR
- Implement SIGNING_KEY

@@ -1,4 +0,0 @@
# Legacy file used only if pip still installs from requirements.txt.
# You may delete this file once you switch entirely to pyproject.toml.
PyYAML

assets/banner.jpg (new binary file, 63 KiB; not shown)
(modified binary file, 1.9 MiB before and after; not shown)

flake.lock
@@ -1,27 +0,0 @@
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1765186076,
"narHash": "sha256-hM20uyap1a0M9d344I692r+ik4gTMyj60cQWO+hAYP8=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "addf7cf5f383a3101ecfba091b98d0a1263dc9b8",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
}
}
},
"root": "root",
"version": 7
}

@@ -36,7 +36,7 @@
rec {
pkgmgr = pyPkgs.buildPythonApplication {
pname = "package-manager";
version = "0.9.1";
version = "1.2.1";
# Use the git repo as source
src = ./.;

@@ -1,9 +1,9 @@
post_install() {
/usr/lib/package-manager/init-nix.sh || true
/usr/lib/package-manager/init-nix.sh || echo ">>> ERROR: /usr/lib/package-manager/init-nix.sh not found or not executable."
}
post_upgrade() {
/usr/lib/package-manager/init-nix.sh || true
/usr/lib/package-manager/init-nix.sh || echo ">>> ERROR: /usr/lib/package-manager/init-nix.sh not found or not executable."
}
post_remove() {

@@ -9,7 +9,7 @@ Homepage: https://github.com/kevinveenbirkenbach/package-manager
Package: package-manager
Architecture: any
Depends: nix, ${misc:Depends}
Depends: sudo, ${misc:Depends}
Description: Wrapper that runs Kevin's package-manager via Nix flake
This package provides the `pkgmgr` command, which runs Kevin's package
manager via a local Nix flake

@@ -3,11 +3,7 @@ set -e
case "$1" in
configure)
if [ -x /usr/lib/package-manager/init-nix.sh ]; then
/usr/lib/package-manager/init-nix.sh || true
else
echo ">>> Warning: /usr/lib/package-manager/init-nix.sh not found or not executable."
fi
/usr/lib/package-manager/init-nix.sh || echo ">>> ERROR: /usr/lib/package-manager/init-nix.sh not found or not executable."
;;
esac

@@ -60,12 +60,7 @@ rm -rf \
%{buildroot}/usr/lib/package-manager/.gitkeep || true
%post
# Initialize Nix (if needed) after installing the package-manager files.
if [ -x /usr/lib/package-manager/init-nix.sh ]; then
/usr/lib/package-manager/init-nix.sh || true
else
echo ">>> Warning: /usr/lib/package-manager/init-nix.sh not found or not executable."
fi
/usr/lib/package-manager/init-nix.sh || echo ">>> ERROR: /usr/lib/package-manager/init-nix.sh not found or not executable."
%postun
echo ">>> package-manager removed. Nix itself was not removed."

@@ -7,7 +7,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "package-manager"
version = "0.9.1"
version = "1.2.1"
description = "Kevin's package-manager tool (pkgmgr)"
readme = "README.md"
requires-python = ">=3.11"

@@ -4,32 +4,21 @@ set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/resolve-base-image.sh"
echo "============================================================"
echo ">>> Building ONLY missing container images"
echo "============================================================"
IMAGE="package-manager-test-$distro"
BASE_IMAGE="$(resolve_base_image "$distro")"
for distro in $DISTROS; do
IMAGE="package-manager-test-$distro"
BASE_IMAGE="$(resolve_base_image "$distro")"
if docker image inspect "$IMAGE" >/dev/null 2>&1; then
echo "[build-missing] Image already exists: $IMAGE (skipping)"
continue
fi
echo
echo "------------------------------------------------------------"
echo "[build-missing] Building missing image: $IMAGE"
echo "BASE_IMAGE = $BASE_IMAGE"
echo "------------------------------------------------------------"
docker build \
--build-arg BASE_IMAGE="$BASE_IMAGE" \
-t "$IMAGE" \
.
done
if docker image inspect "$IMAGE" >/dev/null 2>&1; then
echo "[build-missing] Image already exists: $IMAGE (skipping)"
exit 0
fi
echo
echo "============================================================"
echo ">>> build-missing: Done"
echo "============================================================"
echo "------------------------------------------------------------"
echo "[build-missing] Building missing image: $IMAGE"
echo "BASE_IMAGE = $BASE_IMAGE"
echo "------------------------------------------------------------"
docker build \
--build-arg BASE_IMAGE="$BASE_IMAGE" \
-t "$IMAGE" \
.
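The "build only if missing" guard above (`docker image inspect` before `docker build`) is a plain check-then-act pattern. A stubbed sketch, with the Docker calls replaced by marker files so it runs without Docker:

```bash
# Stand-ins for `docker image inspect` / `docker build` (marker files, illustration only)
STATE_DIR="$(mktemp -d)"
image_exists() { [ -e "$STATE_DIR/$1" ]; }
build_image()  { touch "$STATE_DIR/$1"; }

build_if_missing() {
  if image_exists "$1"; then
    echo "[build-missing] Image already exists: $1 (skipping)"
  else
    echo "[build-missing] Building missing image: $1"
    build_image "$1"
  fi
}

build_if_missing package-manager-test-arch   # builds
build_if_missing package-manager-test-arch   # skips
```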

@@ -4,14 +4,12 @@ set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/resolve-base-image.sh"
for distro in $DISTROS; do
base_image="$(resolve_base_image "$distro")"
base_image="$(resolve_base_image "$distro")"
echo ">>> Building test image for distro '$distro' with NO CACHE (BASE_IMAGE=$base_image)..."
echo ">>> Building test image for distro '$distro' with NO CACHE (BASE_IMAGE=$base_image)..."
docker build \
--no-cache \
--build-arg BASE_IMAGE="$base_image" \
-t "package-manager-test-$distro" \
.
done
docker build \
--no-cache \
--build-arg BASE_IMAGE="$base_image" \
-t "package-manager-test-$distro" \
.

@@ -4,13 +4,11 @@ set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/resolve-base-image.sh"
for distro in $DISTROS; do
base_image="$(resolve_base_image "$distro")"
base_image="$(resolve_base_image "$distro")"
echo ">>> Building test image for distro '$distro' (BASE_IMAGE=$base_image)..."
echo ">>> Building test image for distro '$distro' (BASE_IMAGE=$base_image)..."
docker build \
--build-arg BASE_IMAGE="$base_image" \
-t "package-manager-test-$distro" \
.
done
docker build \
--build-arg BASE_IMAGE="$base_image" \
-t "package-manager-test-$distro" \
.

@@ -3,21 +3,22 @@ set -euo pipefail
echo "[init-nix] Starting Nix initialization..."
NIX_INSTALL_URL="${NIX_INSTALL_URL:-https://nixos.org/nix/install}"
NIX_DOWNLOAD_MAX_TIME=300 # 5 minutes
NIX_DOWNLOAD_SLEEP_INTERVAL=20 # 20 seconds
# ---------------------------------------------------------------------------
# Helper: detect whether we are inside a container (Docker/Podman/etc.)
# Detect whether we are inside a container (Docker/Podman/etc.)
# ---------------------------------------------------------------------------
is_container() {
# Docker / Podman markers
if [[ -f /.dockerenv ]] || [[ -f /run/.containerenv ]]; then
return 0
fi
# cgroup hints
if grep -qiE 'docker|container|podman|lxc' /proc/1/cgroup 2>/dev/null; then
return 0
fi
# Environment variable used by some runtimes
if [[ -n "${container:-}" ]]; then
return 0
fi
@@ -26,200 +27,216 @@ is_container() {
}
# ---------------------------------------------------------------------------
# Helper: ensure Nix binaries are on PATH (multi-user or single-user)
# Ensure Nix binaries are on PATH (multi-user or single-user)
# ---------------------------------------------------------------------------
ensure_nix_on_path() {
# Multi-user profile (daemon install)
if [[ -x /nix/var/nix/profiles/default/bin/nix ]]; then
export PATH="/nix/var/nix/profiles/default/bin:${PATH}"
fi
# Single-user profile (current user)
if [[ -x "${HOME}/.nix-profile/bin/nix" ]]; then
export PATH="${HOME}/.nix-profile/bin:${PATH}"
fi
# Single-user profile for dedicated "nix" user (container case)
if [[ -x /home/nix/.nix-profile/bin/nix ]]; then
export PATH="/home/nix/.nix-profile/bin:${PATH}"
fi
}
# ---------------------------------------------------------------------------
# Fast path: Nix already available
# Ensure Nix build group and users exist (build-users-group = nixbld)
# ---------------------------------------------------------------------------
if command -v nix >/dev/null 2>&1; then
echo "[init-nix] Nix already available on PATH: $(command -v nix)"
exit 0
fi
ensure_nix_on_path
if command -v nix >/dev/null 2>&1; then
echo "[init-nix] Nix found after adjusting PATH: $(command -v nix)"
exit 0
fi
echo "[init-nix] Nix not found, starting installation logic..."
IN_CONTAINER=0
if is_container; then
IN_CONTAINER=1
echo "[init-nix] Detected container environment."
else
echo "[init-nix] No container detected."
fi
# ---------------------------------------------------------------------------
# Container + root: install Nix as dedicated "nix" user (single-user)
# ---------------------------------------------------------------------------
if [[ "${IN_CONTAINER}" -eq 1 && "${EUID:-0}" -eq 0 ]]; then
echo "[init-nix] Running as root inside a container using dedicated 'nix' user."
# Ensure nixbld group (required by Nix)
ensure_nix_build_group() {
if ! getent group nixbld >/dev/null 2>&1; then
echo "[init-nix] Creating group 'nixbld'..."
groupadd -r nixbld
fi
# Ensure Nix build users (nixbld1..nixbld10) as members of nixbld
for i in $(seq 1 10); do
if ! id "nixbld$i" >/dev/null 2>&1; then
echo "[init-nix] Creating build user nixbld$i..."
# -r: system account, -g: primary group, -G: supplementary (ensures membership is listed)
useradd -r -g nixbld -G nixbld -s /usr/sbin/nologin "nixbld$i"
fi
done
}
# Ensure "nix" user (home at /home/nix)
if ! id nix >/dev/null 2>&1; then
echo "[init-nix] Creating user 'nix'..."
# Resolve a valid shell path across distros:
# - Debian/Ubuntu: /bin/bash
# - Arch: /usr/bin/bash (often symlinked)
# Fall back to /bin/sh on ultra-minimal systems.
BASH_SHELL="$(command -v bash || true)"
if [[ -z "${BASH_SHELL}" ]]; then
BASH_SHELL="/bin/sh"
# ---------------------------------------------------------------------------
# Download and run Nix installer with retry
# Usage: install_nix_with_retry daemon|no-daemon [run_as_user]
# ---------------------------------------------------------------------------
install_nix_with_retry() {
local mode="$1"
local run_as="${2:-}"
local installer elapsed=0 mode_flag
case "${mode}" in
daemon) mode_flag="--daemon" ;;
no-daemon) mode_flag="--no-daemon" ;;
*)
echo "[init-nix] ERROR: Invalid mode '${mode}', expected 'daemon' or 'no-daemon'."
exit 1
;;
esac
installer="$(mktemp -t nix-installer.XXXXXX)"
# -------------------------------------------------------------------------
# FIX: mktemp creates files with 0600 by default, which breaks when we later
# run the installer as a different user (e.g., 'nix' in container+root).
# Make it readable and (best-effort) owned by the target user.
# -------------------------------------------------------------------------
chmod 0644 "${installer}"
echo "[init-nix] Downloading Nix installer from ${NIX_INSTALL_URL} with retry (max ${NIX_DOWNLOAD_MAX_TIME}s)..."
while true; do
if curl -fL "${NIX_INSTALL_URL}" -o "${installer}"; then
echo "[init-nix] Successfully downloaded Nix installer to ${installer}"
break
fi
useradd -m -r -g nixbld -s "${BASH_SHELL}" nix
fi
# Ensure /nix exists and is writable by the "nix" user.
#
# In some base images (or previous runs), /nix may already exist and be
# owned by root. In that case the Nix single-user installer will abort with:
#
# "directory /nix exists, but is not writable by you"
#
# To keep container runs idempotent and robust, we always enforce
# ownership nix:nixbld here.
if [[ ! -d /nix ]]; then
echo "[init-nix] Creating /nix with owner nix:nixbld..."
mkdir -m 0755 /nix
chown nix:nixbld /nix
else
current_owner="$(stat -c '%U' /nix 2>/dev/null || echo '?')"
current_group="$(stat -c '%G' /nix 2>/dev/null || echo '?')"
if [[ "${current_owner}" != "nix" || "${current_group}" != "nixbld" ]]; then
echo "[init-nix] /nix already exists with owner ${current_owner}:${current_group} fixing to nix:nixbld..."
chown -R nix:nixbld /nix
local curl_exit=$?
echo "[init-nix] WARNING: Failed to download Nix installer (curl exit code ${curl_exit})."
elapsed=$((elapsed + NIX_DOWNLOAD_SLEEP_INTERVAL))
if (( elapsed >= NIX_DOWNLOAD_MAX_TIME )); then
echo "[init-nix] ERROR: Giving up after ${elapsed}s trying to download Nix installer."
rm -f "${installer}"
exit 1
fi
echo "[init-nix] Retrying in ${NIX_DOWNLOAD_SLEEP_INTERVAL}s (elapsed: ${elapsed}s/${NIX_DOWNLOAD_MAX_TIME}s)..."
sleep "${NIX_DOWNLOAD_SLEEP_INTERVAL}"
done
if [[ -n "${run_as}" ]]; then
# Best-effort: ensure the target user can read the downloaded installer
chown "${run_as}:${run_as}" "${installer}" 2>/dev/null || true
echo "[init-nix] Running installer as user '${run_as}' with mode '${mode}'..."
if command -v sudo >/dev/null 2>&1; then
sudo -u "${run_as}" bash -lc "sh '${installer}' ${mode_flag}"
else
echo "[init-nix] /nix already exists with correct owner nix:nixbld."
su - "${run_as}" -c "sh '${installer}' ${mode_flag}"
fi
if [[ ! -w /nix ]]; then
echo "[init-nix] WARNING: /nix is still not writable after chown; Nix installer may fail."
fi
fi
# Run Nix single-user installer as "nix"
echo "[init-nix] Installing Nix as user 'nix' (single-user, --no-daemon)..."
if command -v sudo >/dev/null 2>&1; then
sudo -u nix bash -lc 'sh <(curl -L https://nixos.org/nix/install) --no-daemon'
else
su - nix -c 'sh <(curl -L https://nixos.org/nix/install) --no-daemon'
echo "[init-nix] Running installer as current user with mode '${mode}'..."
sh "${installer}" "${mode_flag}"
fi
# After installation, expose nix to root via PATH and symlink
ensure_nix_on_path
rm -f "${installer}"
}
if [[ -x /home/nix/.nix-profile/bin/nix ]]; then
if [[ ! -e /usr/local/bin/nix ]]; then
echo "[init-nix] Creating /usr/local/bin/nix symlink -> /home/nix/.nix-profile/bin/nix"
ln -s /home/nix/.nix-profile/bin/nix /usr/local/bin/nix
fi
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
main() {
# Fast path: Nix already available
if command -v nix >/dev/null 2>&1; then
echo "[init-nix] Nix already available on PATH: $(command -v nix)"
return 0
fi
ensure_nix_on_path
if command -v nix >/dev/null 2>&1; then
echo "[init-nix] Nix found after adjusting PATH: $(command -v nix)"
return 0
fi
echo "[init-nix] Nix not found, starting installation logic..."
local IN_CONTAINER=0
if is_container; then
IN_CONTAINER=1
echo "[init-nix] Detected container environment."
else
echo "[init-nix] No container detected."
fi
# -------------------------------------------------------------------------
# Container + root: dedicated "nix" user, single-user install
# -------------------------------------------------------------------------
if [[ "${IN_CONTAINER}" -eq 1 && "${EUID:-0}" -eq 0 ]]; then
echo "[init-nix] Container + root installing as 'nix' user (single-user)."
ensure_nix_build_group
if ! id nix >/dev/null 2>&1; then
echo "[init-nix] Creating user 'nix'..."
local BASH_SHELL
BASH_SHELL="$(command -v bash || true)"
[[ -z "${BASH_SHELL}" ]] && BASH_SHELL="/bin/sh"
useradd -m -r -g nixbld -s "${BASH_SHELL}" nix
fi
if [[ ! -d /nix ]]; then
echo "[init-nix] Creating /nix with owner nix:nixbld..."
mkdir -m 0755 /nix
chown nix:nixbld /nix
else
local current_owner current_group
current_owner="$(stat -c '%U' /nix 2>/dev/null || echo '?')"
current_group="$(stat -c '%G' /nix 2>/dev/null || echo '?')"
if [[ "${current_owner}" != "nix" || "${current_group}" != "nixbld" ]]; then
echo "[init-nix] Fixing /nix ownership from ${current_owner}:${current_group} to nix:nixbld..."
chown -R nix:nixbld /nix
fi
fi
if [[ ! -w /nix ]]; then
echo "[init-nix] WARNING: /nix is not writable after chown; Nix installer may fail."
fi
install_nix_with_retry "no-daemon" "nix"
ensure_nix_on_path
if [[ -x /home/nix/.nix-profile/bin/nix && ! -e /usr/local/bin/nix ]]; then
echo "[init-nix] Creating /usr/local/bin/nix symlink -> /home/nix/.nix-profile/bin/nix"
ln -s /home/nix/.nix-profile/bin/nix /usr/local/bin/nix
fi
# Optionally add PATH hints to /etc/profile (best effort)
if [[ -w /etc/profile ]] && ! grep -q 'Nix profiles' /etc/profile 2>/dev/null; then
cat <<'EOF' >> /etc/profile
# Nix profiles (added by package-manager init-nix.sh)
if [ -d /nix/var/nix/profiles/default/bin ]; then
PATH="/nix/var/nix/profiles/default/bin:$PATH"
fi
if [ -d "$HOME/.nix-profile/bin" ]; then
PATH="$HOME/.nix-profile/bin:$PATH"
fi
EOF
echo "[init-nix] Appended Nix PATH setup to /etc/profile (container mode)."
fi
echo "[init-nix] Nix initialization complete (container root mode)."
exit 0
fi
# ---------------------------------------------------------------------------
# Host (no container)
# ---------------------------------------------------------------------------
if [[ "${IN_CONTAINER}" -eq 0 ]]; then
if command -v systemctl >/dev/null 2>&1; then
echo "[init-nix] Host with systemd using multi-user install (--daemon)."
if [[ "${EUID:-0}" -eq 0 ]]; then
ensure_nix_build_group
fi
install_nix_with_retry "daemon"
else
if [[ "${EUID:-0}" -eq 0 ]]; then
echo "[init-nix] Host without systemd as root using single-user install (--no-daemon)."
ensure_nix_build_group
else
echo "[init-nix] Host without systemd as non-root using single-user install (--no-daemon)."
fi
install_nix_with_retry "no-daemon"
fi
# -------------------------------------------------------------------------
# Container, but not root (rare)
# -------------------------------------------------------------------------
else
echo "[init-nix] Container as non-root using single-user install (--no-daemon)."
install_nix_with_retry "no-daemon"
fi
# ---------------------------------------------------------------------------
# After installation: PATH + /etc/profile
# ---------------------------------------------------------------------------
ensure_nix_on_path
if ! command -v nix >/dev/null 2>&1; then
echo "[init-nix] WARNING: Nix installation finished, but 'nix' is still not on PATH."
echo "[init-nix] You may need to source your shell profile manually."
exit 0
fi
echo "[init-nix] Nix successfully installed at: $(command -v nix)"
# Update global /etc/profile if writable (helps especially on minimal systems)
if [[ -w /etc/profile ]] && ! grep -q 'Nix profiles' /etc/profile 2>/dev/null; then
cat <<'EOF' >> /etc/profile
# Nix profiles (added by package-manager init-nix.sh)
if [ -d /nix/var/nix/profiles/default/bin ]; then
PATH="/nix/var/nix/profiles/default/bin:$PATH"
fi
if [ -d "$HOME/.nix-profile/bin" ]; then
PATH="$HOME/.nix-profile/bin:$PATH"
fi
EOF
echo "[init-nix] Appended Nix PATH setup to /etc/profile"
fi
echo "[init-nix] Nix initialization complete."
}
main "$@"

View File

@@ -45,8 +45,42 @@ else
fi
echo "[aur-builder-setup] Ensuring yay is installed for aur_builder..."
if ! "${RUN_AS_AUR[@]}" 'command -v yay >/dev/null 2>&1'; then
"${RUN_AS_AUR[@]}" 'cd ~ && rm -rf yay && git clone https://aur.archlinux.org/yay.git && cd yay && makepkg -si --noconfirm'
echo "[aur-builder-setup] yay not found starting retry sequence for download..."
MAX_TIME=300
SLEEP_INTERVAL=20
ELAPSED=0
while true; do
if "${RUN_AS_AUR[@]}" '
set -euo pipefail
cd ~
rm -rf yay || true
git clone https://aur.archlinux.org/yay.git yay
'; then
echo "[aur-builder-setup] yay repository cloned successfully."
break
fi
echo "[aur-builder-setup] git clone failed (likely 504). Retrying in ${SLEEP_INTERVAL}s..."
sleep "${SLEEP_INTERVAL}"
ELAPSED=$((ELAPSED + SLEEP_INTERVAL))
if (( ELAPSED >= MAX_TIME )); then
echo "[aur-builder-setup] ERROR: Aborted after 5 minutes of retry attempts."
exit 1
fi
done
# Now build yay after successful clone
"${RUN_AS_AUR[@]}" '
set -euo pipefail
cd ~/yay
makepkg -si --noconfirm
'
else
echo "[aur-builder-setup] yay already installed."
fi

View File

@@ -8,6 +8,12 @@ source "${SCRIPT_DIR}/lib.sh"
OS_ID="$(detect_os_id)"
# Map Manjaro to Arch
if [[ "${OS_ID}" == "manjaro" ]]; then
echo "[run-package] Mapping OS 'manjaro' → 'arch'"
OS_ID="arch"
fi
echo "[run-package] Detected OS: ${OS_ID}"
case "${OS_ID}" in

View File

@@ -3,42 +3,84 @@ set -euo pipefail
# venv-create.sh
#
# Create/update a Python virtual environment for pkgmgr and install dependencies.
#
# Priority order:
# 1) pyproject.toml -> pip install (editable by default)
# 2) requirements.txt
# 3) _requirements.txt (legacy)
#
# Usage:
# PKGMGR_VENV_DIR=/home/dev/.venvs/pkgmgr bash scripts/installation/venv-create.sh
# or
# bash scripts/installation/venv-create.sh /home/dev/.venvs/pkgmgr
#
# Optional:
# PKGMGR_PIP_EDITABLE=0 # install non-editable (default: 1)
# PKGMGR_PIP_EXTRAS="dev,test" # install extras: .[dev,test]
# PKGMGR_PREFER_NIX=1 # print Nix hint and exit non-zero
PROJECT_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
cd "${PROJECT_ROOT}"
VENV_DIR="${PKGMGR_VENV_DIR:-${1:-${HOME}/.venvs/pkgmgr}}"
PIP_EDITABLE="${PKGMGR_PIP_EDITABLE:-1}"
PIP_EXTRAS="${PKGMGR_PIP_EXTRAS:-}"
PREFER_NIX="${PKGMGR_PREFER_NIX:-0}"
echo "[venv-create] Using VENV_DIR=${VENV_DIR}"
if [[ "${PREFER_NIX}" == "1" ]]; then
echo "[venv-create] PKGMGR_PREFER_NIX=1 set."
echo "[venv-create] Hint: Use Nix instead of a venv for reproducible installs:"
echo "[venv-create] nix develop"
echo "[venv-create] nix run .#pkgmgr -- --help"
exit 2
fi
echo "[venv-create] Ensuring virtualenv parent directory exists..."
mkdir -p "$(dirname "${VENV_DIR}")"
if [[ ! -d "${VENV_DIR}" ]]; then
echo "[venv-create] Creating virtual environment at: ${VENV_DIR}"
python3 -m venv "${VENV_DIR}"
echo "[venv-create] Creating virtual environment at: ${VENV_DIR}"
python3 -m venv "${VENV_DIR}"
else
echo "[venv-create] Virtual environment already exists at: ${VENV_DIR}"
echo "[venv-create] Virtual environment already exists at: ${VENV_DIR}"
fi
echo "[venv-create] Installing Python tooling into venv..."
"${VENV_DIR}/bin/python" -m ensurepip --upgrade
"${VENV_DIR}/bin/pip" install --upgrade pip setuptools wheel
if [[ -f "requirements.txt" ]]; then
echo "[venv-create] Installing dependencies from requirements.txt..."
"${VENV_DIR}/bin/pip" install -r requirements.txt
# ---------------------------------------------------------------------------
# Install dependencies
# ---------------------------------------------------------------------------
if [[ -f "pyproject.toml" ]]; then
echo "[venv-create] Detected pyproject.toml. Installing project via pip..."
target="."
if [[ -n "${PIP_EXTRAS}" ]]; then
target=".[${PIP_EXTRAS}]"
fi
if [[ "${PIP_EDITABLE}" == "1" ]]; then
echo "[venv-create] pip install -e ${target}"
"${VENV_DIR}/bin/pip" install -e "${target}"
else
echo "[venv-create] pip install ${target}"
"${VENV_DIR}/bin/pip" install "${target}"
fi
elif [[ -f "requirements.txt" ]]; then
echo "[venv-create] Installing dependencies from requirements.txt..."
"${VENV_DIR}/bin/pip" install -r requirements.txt
elif [[ -f "_requirements.txt" ]]; then
echo "[venv-create] Installing dependencies from _requirements.txt..."
"${VENV_DIR}/bin/pip" install -r _requirements.txt
echo "[venv-create] Installing dependencies from _requirements.txt (legacy)..."
"${VENV_DIR}/bin/pip" install -r _requirements.txt
else
echo "[venv-create] No requirements.txt or _requirements.txt found. Skipping dependency installation."
echo "[venv-create] No pyproject.toml, requirements.txt, or _requirements.txt found. Skipping dependency installation."
fi
echo "[venv-create] Done."

View File

@@ -8,19 +8,18 @@ fi
FLAKE_DIR="/usr/lib/package-manager"
# ---------------------------------------------------------------------------
# Try to ensure that "nix" is on PATH (common locations + container user)
# ---------------------------------------------------------------------------
if ! command -v nix >/dev/null 2>&1; then
# Common locations for Nix installations
CANDIDATES=(
"/nix/var/nix/profiles/default/bin/nix"
"${HOME:-/root}/.nix-profile/bin/nix"
"/home/nix/.nix-profile/bin/nix"
)
for candidate in "${CANDIDATES[@]}"; do
if [[ -x "$candidate" ]]; then
# Prepend the directory of the candidate to PATH
PATH="$(dirname "$candidate"):${PATH}"
export PATH
break
@@ -28,13 +27,22 @@ if ! command -v nix >/dev/null 2>&1; then
done
fi
# ---------------------------------------------------------------------------
# If nix is still missing, try to run init-nix.sh once
# ---------------------------------------------------------------------------
if ! command -v nix >/dev/null 2>&1; then
if [[ -x "${FLAKE_DIR}/init-nix.sh" ]]; then
"${FLAKE_DIR}/init-nix.sh" || true
fi
fi
# ---------------------------------------------------------------------------
# Primary path: use Nix flake if available
# ---------------------------------------------------------------------------
if command -v nix >/dev/null 2>&1; then
exec nix run "${FLAKE_DIR}#pkgmgr" -- "$@"
fi
echo "[pkgmgr-wrapper] ERROR: 'nix' binary not found on PATH."
echo "[pkgmgr-wrapper] ERROR: 'nix' binary not found on PATH after init."
echo "[pkgmgr-wrapper] Nix is required to run pkgmgr (no Python fallback)."
exit 1

View File

@@ -1,41 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
echo "============================================================"
echo ">>> Running sanity test: verifying test containers start"
echo "============================================================"
for distro in $DISTROS; do
IMAGE="package-manager-test-$distro"
echo
echo "------------------------------------------------------------"
echo ">>> Testing container: $IMAGE"
echo "------------------------------------------------------------"
echo "[test-container] Running: docker run --rm --entrypoint pkgmgr $IMAGE --help"
echo
# Run the command and capture the output
if OUTPUT=$(docker run --rm \
-e PKGMGR_DEV=1 \
-v pkgmgr_nix_store_${distro}:/nix \
-v "$(pwd):/src" \
-v "pkgmgr_nix_cache_${distro}:/root/.cache/nix" \
"$IMAGE" 2>&1); then
echo "$OUTPUT"
echo
echo "[test-container] SUCCESS: $IMAGE responded to 'pkgmgr --help'"
else
echo "$OUTPUT"
echo
echo "[test-container] ERROR: $IMAGE failed to run 'pkgmgr --help'"
exit 1
fi
done
echo
echo "============================================================"
echo ">>> All containers passed the sanity check"
echo "============================================================"

View File

@@ -1,65 +1,60 @@
#!/usr/bin/env bash
set -euo pipefail
echo ">>> Running E2E tests in all distros: $DISTROS"
echo "============================================================"
echo ">>> Running E2E tests: $distro"
echo "============================================================"
for distro in $DISTROS; do
echo "============================================================"
echo ">>> Running E2E tests: $distro"
echo "============================================================"
docker run --rm \
-v "$(pwd):/src" \
-v "pkgmgr_nix_store_${distro}:/nix" \
-v "pkgmgr_nix_cache_${distro}:/root/.cache/nix" \
-e PKGMGR_DEV=1 \
-e TEST_PATTERN="${TEST_PATTERN}" \
--workdir /src \
"package-manager-test-${distro}" \
bash -lc '
set -euo pipefail
docker run --rm \
-v "$(pwd):/src" \
-v "pkgmgr_nix_store_${distro}:/nix" \
-v "pkgmgr_nix_cache_${distro}:/root/.cache/nix" \
-e PKGMGR_DEV=1 \
-e TEST_PATTERN="${TEST_PATTERN}" \
--workdir /src \
--entrypoint bash \
"package-manager-test-${distro}" \
-c '
set -euo pipefail
# Load distro info
if [ -f /etc/os-release ]; then
. /etc/os-release
fi
# Load distro info
if [ -f /etc/os-release ]; then
. /etc/os-release
fi
echo "Running tests inside distro: ${ID:-unknown}"
echo "Running tests inside distro: ${ID:-unknown}"
# Load Nix environment if available
if [ -f "/nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh" ]; then
. "/nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh"
fi
# Load Nix environment if available
if [ -f "/nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh" ]; then
. "/nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh"
fi
if [ -f "$HOME/.nix-profile/etc/profile.d/nix.sh" ]; then
. "$HOME/.nix-profile/etc/profile.d/nix.sh"
fi
if [ -f "$HOME/.nix-profile/etc/profile.d/nix.sh" ]; then
. "$HOME/.nix-profile/etc/profile.d/nix.sh"
fi
PATH="/nix/var/nix/profiles/default/bin:$HOME/.nix-profile/bin:$PATH"
PATH="/nix/var/nix/profiles/default/bin:$HOME/.nix-profile/bin:$PATH"
command -v nix >/dev/null || {
echo "ERROR: nix not found."
exit 1
}
command -v nix >/dev/null || {
echo "ERROR: nix not found."
exit 1
}
# Mark the mounted repository as safe to avoid Git ownership errors.
# Newer Git (e.g. on Ubuntu) complains about the gitdir (/src/.git),
# older versions about the worktree (/src). Nix turns "." into the
# flake input "git+file:///src", which then uses Git under the hood.
if command -v git >/dev/null 2>&1; then
# Worktree path
git config --global --add safe.directory /src || true
# Gitdir path shown in the "dubious ownership" error
git config --global --add safe.directory /src/.git || true
# Ephemeral CI containers: allow all paths as a last resort
git config --global --add safe.directory '*' || true
fi
# Mark the mounted repository as safe to avoid Git ownership errors.
# Newer Git (e.g. on Ubuntu) complains about the gitdir (/src/.git),
# older versions about the worktree (/src). Nix turns "." into the
# flake input "git+file:///src", which then uses Git under the hood.
if command -v git >/dev/null 2>&1; then
# Worktree path
git config --global --add safe.directory /src || true
# Gitdir path shown in the "dubious ownership" error
git config --global --add safe.directory /src/.git || true
# Ephemeral CI containers: allow all paths as a last resort
git config --global --add safe.directory '*' || true
fi
# Run the E2E tests inside the Nix development shell
nix develop .#default --no-write-lock-file -c \
python3 -m unittest discover \
-s /src/tests/e2e \
-p "$TEST_PATTERN"
'
done
# Run the E2E tests inside the Nix development shell
nix develop .#default --no-write-lock-file -c \
python3 -m unittest discover \
-s /src/tests/e2e \
-p "$TEST_PATTERN"
'

View File

@@ -0,0 +1,48 @@
#!/usr/bin/env bash
set -euo pipefail
IMAGE="package-manager-test-${distro}"
echo "============================================================"
echo ">>> Running Nix flake-only test in ${distro} container"
echo ">>> Image: ${IMAGE}"
echo "============================================================"
docker run --rm \
-v "$(pwd):/src" \
-v "pkgmgr_nix_store_${distro}:/nix" \
-v "pkgmgr_nix_cache_${distro}:/root/.cache/nix" \
--workdir /src \
-e PKGMGR_DEV=0 \
"${IMAGE}" \
bash -lc '
set -euo pipefail
if command -v git >/dev/null 2>&1; then
git config --global --add safe.directory /src || true
git config --global --add safe.directory /src/.git || true
git config --global --add safe.directory "*" || true
fi
echo ">>> preflight: nix must exist in image"
if ! command -v nix >/dev/null 2>&1; then
echo "NO_NIX"
echo "ERROR: nix not found in image '\'''"${IMAGE}"''\'' (distro='"${distro}"')"
echo "HINT: Ensure Nix is installed during image build for this distro."
exit 1
fi
echo ">>> nix version"
nix --version
echo ">>> nix flake show"
nix flake show . --no-write-lock-file >/dev/null
echo ">>> nix build .#default"
nix build .#default --no-link --no-write-lock-file
echo ">>> nix run .#pkgmgr -- --help"
nix run .#pkgmgr --no-write-lock-file -- --help
echo ">>> OK: Nix flake-only test succeeded."
'

View File

@@ -0,0 +1,32 @@
#!/usr/bin/env bash
set -euo pipefail
IMAGE="package-manager-test-$distro"
echo
echo "------------------------------------------------------------"
echo ">>> Testing VENV: $IMAGE"
echo "------------------------------------------------------------"
echo "[test-env-virtual] Inspect image metadata:"
docker image inspect "$IMAGE" | sed -n '1,40p'
echo "[test-env-virtual] Running: docker run --rm --entrypoint pkgmgr $IMAGE --help"
echo
# Run the command and capture the output
if OUTPUT=$(docker run --rm \
-e PKGMGR_DEV=1 \
-v pkgmgr_nix_store_${distro}:/nix \
-v "$(pwd):/src" \
-v "pkgmgr_nix_cache_${distro}:/root/.cache/nix" \
"$IMAGE" 2>&1); then
echo "$OUTPUT"
echo
echo "[test-env-virtual] SUCCESS: $IMAGE responded to 'pkgmgr --help'"
else
echo "$OUTPUT"
echo
echo "[test-env-virtual] ERROR: $IMAGE failed to run 'pkgmgr --help'"
exit 1
fi

View File

@@ -1,8 +1,6 @@
#!/usr/bin/env bash
set -euo pipefail
: "${distro:=arch}"
echo "============================================================"
echo ">>> Running INTEGRATION tests in ${distro} container"
echo "============================================================"
@@ -14,9 +12,8 @@ docker run --rm \
--workdir /src \
-e PKGMGR_DEV=1 \
-e TEST_PATTERN="${TEST_PATTERN}" \
--entrypoint bash \
"package-manager-test-${distro}" \
-c '
set -e;
git config --global --add safe.directory /src || true;
nix develop .#default --no-write-lock-file -c \

View File

@@ -1,8 +1,6 @@
#!/usr/bin/env bash
set -euo pipefail
: "${distro:=arch}"
echo "============================================================"
echo ">>> Running UNIT tests in ${distro} container"
echo "============================================================"
@@ -14,9 +12,8 @@ docker run --rm \
--workdir /src \
-e PKGMGR_DEV=1 \
-e TEST_PATTERN="${TEST_PATTERN}" \
--entrypoint bash \
"package-manager-test-${distro}" \
-c '
set -e;
git config --global --add safe.directory /src || true;
nix develop .#default --no-write-lock-file -c \

View File

@@ -1,235 +1,14 @@
# pkgmgr/actions/branch/__init__.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
High-level helpers for branch-related operations.
This module encapsulates the actual Git logic so the CLI layer
(pkgmgr.cli.commands.branch) stays thin and testable.
Public API for branch actions.
"""
from __future__ import annotations
from .open_branch import open_branch
from .close_branch import close_branch
from .drop_branch import drop_branch
from typing import Optional
from pkgmgr.core.git import run_git, GitError, get_current_branch
# ---------------------------------------------------------------------------
# Branch creation (open)
# ---------------------------------------------------------------------------
def open_branch(
name: Optional[str],
base_branch: str = "main",
fallback_base: str = "master",
cwd: str = ".",
) -> None:
"""
Create and push a new feature branch on top of a base branch.
The base branch is resolved by:
1. Trying 'base_branch' (default: 'main')
2. Falling back to 'fallback_base' (default: 'master')
Steps:
1) git fetch origin
2) git checkout <resolved_base>
3) git pull origin <resolved_base>
4) git checkout -b <name>
5) git push -u origin <name>
If `name` is None or empty, the user is prompted to enter one.
"""
# Request name interactively if not provided
if not name:
name = input("Enter new branch name: ").strip()
if not name:
raise RuntimeError("Branch name must not be empty.")
# Resolve which base branch to use (main or master)
resolved_base = _resolve_base_branch(base_branch, fallback_base, cwd=cwd)
# 1) Fetch from origin
try:
run_git(["fetch", "origin"], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to fetch from origin before creating branch {name!r}: {exc}"
) from exc
# 2) Checkout base branch
try:
run_git(["checkout", resolved_base], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to checkout base branch {resolved_base!r}: {exc}"
) from exc
# 3) Pull latest changes for base branch
try:
run_git(["pull", "origin", resolved_base], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to pull latest changes for base branch {resolved_base!r}: {exc}"
) from exc
# 4) Create new branch
try:
run_git(["checkout", "-b", name], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to create new branch {name!r} from base {resolved_base!r}: {exc}"
) from exc
# 5) Push new branch to origin
try:
run_git(["push", "-u", "origin", name], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to push new branch {name!r} to origin: {exc}"
) from exc
# ---------------------------------------------------------------------------
# Base branch resolver (shared by open/close)
# ---------------------------------------------------------------------------
def _resolve_base_branch(
preferred: str,
fallback: str,
cwd: str,
) -> str:
"""
Resolve the base branch to use.
Try `preferred` first (default: main),
fall back to `fallback` (default: master).
Raise RuntimeError if neither exists.
"""
for candidate in (preferred, fallback):
try:
run_git(["rev-parse", "--verify", candidate], cwd=cwd)
return candidate
except GitError:
continue
raise RuntimeError(
f"Neither {preferred!r} nor {fallback!r} exist in this repository."
)
# ---------------------------------------------------------------------------
# Branch closing (merge + deletion)
# ---------------------------------------------------------------------------
def close_branch(
name: Optional[str],
base_branch: str = "main",
fallback_base: str = "master",
cwd: str = ".",
) -> None:
"""
Merge a feature branch into the base branch and delete it afterwards.
Steps:
1) Determine the branch name (argument or current branch)
2) Resolve base branch (main/master)
3) Ask for confirmation
4) git fetch origin
5) git checkout <base>
6) git pull origin <base>
7) git merge --no-ff <name>
8) git push origin <base>
9) Delete branch locally
10) Delete branch on origin (best effort)
"""
# 1) Determine which branch should be closed
if not name:
try:
name = get_current_branch(cwd=cwd)
except GitError as exc:
raise RuntimeError(f"Failed to detect current branch: {exc}") from exc
if not name:
raise RuntimeError("Branch name must not be empty.")
# 2) Resolve base branch
target_base = _resolve_base_branch(base_branch, fallback_base, cwd=cwd)
if name == target_base:
raise RuntimeError(
f"Refusing to close base branch {target_base!r}. "
"Please specify a feature branch."
)
# 3) Ask user for confirmation
prompt = (
f"Merge branch '{name}' into '{target_base}' and delete it afterwards? "
"(y/N): "
)
answer = input(prompt).strip().lower()
if answer != "y":
print("Aborted closing branch.")
return
# 4) Fetch from origin
try:
run_git(["fetch", "origin"], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to fetch from origin before closing branch {name!r}: {exc}"
) from exc
# 5) Checkout base
try:
run_git(["checkout", target_base], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to checkout base branch {target_base!r}: {exc}"
) from exc
# 6) Pull latest base state
try:
run_git(["pull", "origin", target_base], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to pull latest changes for base branch {target_base!r}: {exc}"
) from exc
# 7) Merge the feature branch
try:
run_git(["merge", "--no-ff", name], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to merge branch {name!r} into {target_base!r}: {exc}"
) from exc
# 8) Push updated base
try:
run_git(["push", "origin", target_base], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to push base branch {target_base!r} after merge: {exc}"
) from exc
# 9) Delete branch locally
try:
run_git(["branch", "-d", name], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to delete local branch {name!r}: {exc}"
) from exc
# 10) Delete branch on origin (best effort)
try:
run_git(["push", "origin", "--delete", name], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Branch {name!r} was deleted locally, but remote deletion failed: {exc}"
) from exc
__all__ = [
"open_branch",
"close_branch",
"drop_branch",
]

View File

@@ -0,0 +1,100 @@
from __future__ import annotations
from typing import Optional
from pkgmgr.core.git import run_git, GitError, get_current_branch
from .utils import _resolve_base_branch
def close_branch(
name: Optional[str],
base_branch: str = "main",
fallback_base: str = "master",
cwd: str = ".",
force: bool = False,
) -> None:
"""
Merge a feature branch into the base branch and delete it afterwards.
"""
# Determine branch name
if not name:
try:
name = get_current_branch(cwd=cwd)
except GitError as exc:
raise RuntimeError(f"Failed to detect current branch: {exc}") from exc
if not name:
raise RuntimeError("Branch name must not be empty.")
target_base = _resolve_base_branch(base_branch, fallback_base, cwd=cwd)
if name == target_base:
raise RuntimeError(
f"Refusing to close base branch {target_base!r}. "
"Please specify a feature branch."
)
# Confirmation
if not force:
answer = input(
f"Merge branch '{name}' into '{target_base}' and delete it afterwards? (y/N): "
).strip().lower()
if answer != "y":
print("Aborted closing branch.")
return
# Fetch
try:
run_git(["fetch", "origin"], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to fetch from origin before closing branch {name!r}: {exc}"
) from exc
# Checkout base
try:
run_git(["checkout", target_base], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to checkout base branch {target_base!r}: {exc}"
) from exc
# Pull latest
try:
run_git(["pull", "origin", target_base], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to pull latest changes for base branch {target_base!r}: {exc}"
) from exc
# Merge
try:
run_git(["merge", "--no-ff", name], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to merge branch {name!r} into {target_base!r}: {exc}"
) from exc
# Push result
try:
run_git(["push", "origin", target_base], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to push base branch {target_base!r} after merge: {exc}"
) from exc
# Delete local
try:
run_git(["branch", "-d", name], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to delete local branch {name!r}: {exc}"
) from exc
# Delete remote
try:
run_git(["push", "origin", "--delete", name], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Branch {name!r} deleted locally, but remote deletion failed: {exc}"
) from exc
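The flow above maps onto a plain git command sequence. The following self-contained demo runs it against a throwaway local origin (all repository paths and branch names are illustrative; this sketches the flow, not the pkgmgr implementation, and assumes git is available):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Build a throwaway origin with a 'main' branch and a merged-ready feature branch.
tmp="$(mktemp -d)"
trap 'rm -rf "$tmp"' EXIT
git init --bare -q "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "init"
git branch -M main
git push -q -u origin main
git checkout -q -b feature/x
git commit -q --allow-empty -m "feature work"
git push -q -u origin feature/x
# --- the close_branch sequence ---
git fetch -q origin
git checkout -q main
git pull -q origin main
git merge -q --no-ff -m "merge feature/x" feature/x
git push -q origin main
git branch -d feature/x >/dev/null
git push -q origin --delete feature/x
# ---------------------------------
git branch --list feature/x   # empty: the branch is gone locally
```

The `--no-ff` merge guarantees a merge commit even when the base could fast-forward, so the feature branch remains visible in history after deletion.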

View File

@@ -0,0 +1,56 @@
from __future__ import annotations
from typing import Optional
from pkgmgr.core.git import run_git, GitError, get_current_branch
from .utils import _resolve_base_branch
def drop_branch(
name: Optional[str],
base_branch: str = "main",
fallback_base: str = "master",
cwd: str = ".",
force: bool = False,
) -> None:
"""
Delete a branch locally and remotely without merging.
"""
if not name:
try:
name = get_current_branch(cwd=cwd)
except GitError as exc:
raise RuntimeError(f"Failed to detect current branch: {exc}") from exc
if not name:
raise RuntimeError("Branch name must not be empty.")
target_base = _resolve_base_branch(base_branch, fallback_base, cwd=cwd)
if name == target_base:
raise RuntimeError(
f"Refusing to drop base branch {target_base!r}. It cannot be deleted."
)
# Confirmation
if not force:
answer = input(
f"Delete branch '{name}' locally and on origin? This is destructive! (y/N): "
).strip().lower()
if answer != "y":
print("Aborted dropping branch.")
return
# Local delete
try:
run_git(["branch", "-d", name], cwd=cwd)
except GitError as exc:
raise RuntimeError(f"Failed to delete local branch {name!r}: {exc}") from exc
# Remote delete
try:
run_git(["push", "origin", "--delete", name], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Branch {name!r} was deleted locally, but remote deletion failed: {exc}"
) from exc

View File

@@ -0,0 +1,65 @@
from __future__ import annotations
from typing import Optional
from pkgmgr.core.git import run_git, GitError
from .utils import _resolve_base_branch
def open_branch(
name: Optional[str],
base_branch: str = "main",
fallback_base: str = "master",
cwd: str = ".",
) -> None:
"""
Create and push a new feature branch on top of a base branch.
"""
# Request name interactively if not provided
if not name:
name = input("Enter new branch name: ").strip()
if not name:
raise RuntimeError("Branch name must not be empty.")
resolved_base = _resolve_base_branch(base_branch, fallback_base, cwd=cwd)
# 1) Fetch from origin
try:
run_git(["fetch", "origin"], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to fetch from origin before creating branch {name!r}: {exc}"
) from exc
# 2) Checkout base branch
try:
run_git(["checkout", resolved_base], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to checkout base branch {resolved_base!r}: {exc}"
) from exc
# 3) Pull latest changes
try:
run_git(["pull", "origin", resolved_base], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to pull latest changes for base branch {resolved_base!r}: {exc}"
) from exc
# 4) Create new branch
try:
run_git(["checkout", "-b", name], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to create new branch {name!r} from base {resolved_base!r}: {exc}"
) from exc
# 5) Push new branch
try:
run_git(["push", "-u", "origin", name], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to push new branch {name!r} to origin: {exc}"
) from exc

View File

@@ -0,0 +1,27 @@
from __future__ import annotations
from pkgmgr.core.git import run_git, GitError
def _resolve_base_branch(
preferred: str,
fallback: str,
cwd: str,
) -> str:
"""
Resolve the base branch to use.
Try `preferred` first (default: main),
fall back to `fallback` (default: master).
Raise RuntimeError if neither exists.
"""
for candidate in (preferred, fallback):
try:
run_git(["rev-parse", "--verify", candidate], cwd=cwd)
return candidate
except GitError:
continue
raise RuntimeError(
f"Neither {preferred!r} nor {fallback!r} exist in this repository."
)
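The resolution logic above reduces to "first candidate that Git can verify". A git-free sketch, with `branch_exists` as a stand-in predicate for `git rev-parse --verify` (hypothetical helper, not part of pkgmgr):

```python
def resolve_base_branch(preferred, fallback, branch_exists):
    """Return the first existing candidate branch, or raise (sketch)."""
    for candidate in (preferred, fallback):
        if branch_exists(candidate):  # stands in for `git rev-parse --verify`
            return candidate
    raise RuntimeError(
        f"Neither {preferred!r} nor {fallback!r} exist in this repository."
    )

# A repo that only has 'master' resolves to the fallback:
print(resolve_base_branch("main", "master", {"master"}.__contains__))  # master
```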

View File

@@ -139,22 +139,27 @@ class NixFlakeInstaller(BaseInstaller):
for output, allow_failure in outputs:
    cmd = f"nix profile install {ctx.repo_dir}#{output}"
    print(f"[INFO] Running: {cmd}")
    try:
        run_command(
            cmd,
            cwd=ctx.repo_dir,
            preview=ctx.preview,
            allow_failure=allow_failure,
        )
    except SystemExit as e:
        print(f"[Error] Failed to install Nix flake output '{output}': {e}")
        if not allow_failure:
            # Mandatory output failed → fatal for the pipeline.
            raise
        # Optional output failed → log and continue.
        print(
            "[Warning] Continuing despite failure to install "
            f"optional output '{output}'."
        )
        continue
    print(f"Nix flake output '{output}' successfully installed.")

View File

@@ -0,0 +1,26 @@
from __future__ import annotations
"""
High-level mirror actions.
Public API:
- list_mirrors
- diff_mirrors
- merge_mirrors
- setup_mirrors
"""
from .types import Repository, MirrorMap
from .list_cmd import list_mirrors
from .diff_cmd import diff_mirrors
from .merge_cmd import merge_mirrors
from .setup_cmd import setup_mirrors
__all__ = [
"Repository",
"MirrorMap",
"list_mirrors",
"diff_mirrors",
"merge_mirrors",
"setup_mirrors",
]

View File

@@ -0,0 +1,31 @@
from __future__ import annotations
from typing import List
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from .io import load_config_mirrors, read_mirrors_file
from .types import MirrorMap, RepoMirrorContext, Repository
def build_context(
repo: Repository,
repositories_base_dir: str,
all_repos: List[Repository],
) -> RepoMirrorContext:
"""
Build a RepoMirrorContext for a single repository.
"""
identifier = get_repo_identifier(repo, all_repos)
repo_dir = get_repo_dir(repositories_base_dir, repo)
config_mirrors: MirrorMap = load_config_mirrors(repo)
file_mirrors: MirrorMap = read_mirrors_file(repo_dir)
return RepoMirrorContext(
identifier=identifier,
repo_dir=repo_dir,
config_mirrors=config_mirrors,
file_mirrors=file_mirrors,
)

View File

@@ -0,0 +1,60 @@
from __future__ import annotations
from typing import List
from .context import build_context
from .printing import print_header
from .types import Repository
def diff_mirrors(
selected_repos: List[Repository],
repositories_base_dir: str,
all_repos: List[Repository],
) -> None:
"""
Show differences between config mirrors and MIRRORS file.
- Mirrors present only in config are reported as "ONLY IN CONFIG".
- Mirrors present only in MIRRORS file are reported as "ONLY IN FILE".
- Mirrors with same name but different URLs are reported as "URL MISMATCH".
"""
for repo in selected_repos:
ctx = build_context(repo, repositories_base_dir, all_repos)
print_header("[MIRROR DIFF]", ctx)
config_m = ctx.config_mirrors
file_m = ctx.file_mirrors
if not config_m and not file_m:
print(" No mirrors configured in config or MIRRORS file.")
print()
continue
# Mirrors only in config
for name, url in sorted(config_m.items()):
if name not in file_m:
print(f" [ONLY IN CONFIG] {name}: {url}")
# Mirrors only in MIRRORS file
for name, url in sorted(file_m.items()):
if name not in config_m:
print(f" [ONLY IN FILE] {name}: {url}")
# Mirrors with same name but different URLs
shared = set(config_m) & set(file_m)
for name in sorted(shared):
url_cfg = config_m.get(name)
url_file = file_m.get(name)
if url_cfg != url_file:
print(
f" [URL MISMATCH] {name}:\n"
f" config: {url_cfg}\n"
f" file: {url_file}"
)
if config_m and file_m and config_m == file_m:
print(" [OK] Mirrors in config and MIRRORS file are in sync.")
print()

View File

@@ -0,0 +1,179 @@
from __future__ import annotations
import os
from typing import List, Optional, Set
from pkgmgr.core.command.run import run_command
from pkgmgr.core.git import GitError, run_git
from .types import MirrorMap, RepoMirrorContext, Repository
def build_default_ssh_url(repo: Repository) -> Optional[str]:
"""
Build a simple SSH URL from repo config if no explicit mirror is defined.
Example: git@github.com:account/repository.git
"""
provider = repo.get("provider")
account = repo.get("account")
name = repo.get("repository")
port = repo.get("port")
if not provider or not account or not name:
return None
provider = str(provider)
account = str(account)
name = str(name)
if port:
return f"ssh://git@{provider}:{port}/{account}/{name}.git"
# GitHub-style shorthand
return f"git@{provider}:{account}/{name}.git"
def determine_primary_remote_url(
repo: Repository,
resolved_mirrors: MirrorMap,
) -> Optional[str]:
"""
Determine the primary remote URL in a consistent way:
1. resolved_mirrors["origin"]
2. any resolved mirror (first by name)
3. default SSH URL from provider/account/repository
"""
if "origin" in resolved_mirrors:
return resolved_mirrors["origin"]
if resolved_mirrors:
first_name = sorted(resolved_mirrors.keys())[0]
return resolved_mirrors[first_name]
return build_default_ssh_url(repo)
def _safe_git_output(args: List[str], cwd: str) -> Optional[str]:
"""
Run a Git command via run_git and return its stdout, or None on failure.
"""
try:
return run_git(args, cwd=cwd)
except GitError:
return None
def current_origin_url(repo_dir: str) -> Optional[str]:
"""
Return the current URL for remote 'origin', or None if not present.
"""
output = _safe_git_output(["remote", "get-url", "origin"], cwd=repo_dir)
if not output:
return None
url = output.strip()
return url or None
def has_origin_remote(repo_dir: str) -> bool:
"""
Check whether a remote called 'origin' exists in the repository.
"""
output = _safe_git_output(["remote"], cwd=repo_dir)
if not output:
return False
names = output.split()
return "origin" in names
def _ensure_push_urls_for_origin(
repo_dir: str,
mirrors: MirrorMap,
preview: bool,
) -> None:
"""
Ensure that all mirror URLs are present as push URLs on 'origin'.
"""
desired: Set[str] = {url for url in mirrors.values() if url}
if not desired:
return
existing_output = _safe_git_output(
["remote", "get-url", "--push", "--all", "origin"],
cwd=repo_dir,
)
existing = set(existing_output.splitlines()) if existing_output else set()
missing = sorted(desired - existing)
for url in missing:
cmd = f"git remote set-url --add --push origin {url}"
if preview:
print(f"[PREVIEW] Would run in {repo_dir!r}: {cmd}")
else:
print(f"[INFO] Adding push URL to 'origin': {url}")
run_command(cmd, cwd=repo_dir, preview=False)
def ensure_origin_remote(
repo: Repository,
ctx: RepoMirrorContext,
preview: bool,
) -> None:
"""
Ensure that a usable 'origin' remote exists and has all push URLs.
"""
repo_dir = ctx.repo_dir
resolved_mirrors = ctx.resolved_mirrors
if not os.path.isdir(os.path.join(repo_dir, ".git")):
print(f"[WARN] {repo_dir} is not a Git repository (no .git directory).")
return
url = determine_primary_remote_url(repo, resolved_mirrors)
if not has_origin_remote(repo_dir):
if not url:
print(
"[WARN] Could not determine URL for 'origin' remote. "
"Please configure mirrors or provider/account/repository."
)
return
cmd = f"git remote add origin {url}"
if preview:
print(f"[PREVIEW] Would run in {repo_dir!r}: {cmd}")
else:
print(f"[INFO] Adding 'origin' remote in {repo_dir}: {url}")
run_command(cmd, cwd=repo_dir, preview=False)
else:
current = current_origin_url(repo_dir)
if current == url or not url:
print(
f"[INFO] 'origin' already points to "
f"{current or '<unknown>'} (no change needed)."
)
else:
# We do not auto-change origin here, only log the mismatch.
print(
"[INFO] 'origin' exists with URL "
f"{current or '<unknown>'}; not changing to {url}."
)
# Ensure all mirrors are present as push URLs
_ensure_push_urls_for_origin(repo_dir, resolved_mirrors, preview)
def is_remote_reachable(url: str, cwd: Optional[str] = None) -> bool:
"""
Check whether a remote repository is reachable via `git ls-remote`.
This does NOT modify anything; it only probes the remote.
"""
workdir = cwd or os.getcwd()
try:
# --exit-code → non-zero exit code if the remote does not exist
run_git(["ls-remote", "--exit-code", url], cwd=workdir)
return True
except GitError:
return False

View File

@@ -0,0 +1,98 @@
from __future__ import annotations
import os
from urllib.parse import urlparse
from typing import Mapping
from .types import MirrorMap, Repository
def load_config_mirrors(repo: Repository) -> MirrorMap:
mirrors = repo.get("mirrors") or {}
result: MirrorMap = {}
if isinstance(mirrors, dict):
for name, url in mirrors.items():
if url:
result[str(name)] = str(url)
return result
if isinstance(mirrors, list):
for entry in mirrors:
if isinstance(entry, dict):
name = entry.get("name")
url = entry.get("url")
if name and url:
result[str(name)] = str(url)
return result
def read_mirrors_file(repo_dir: str, filename: str = "MIRRORS") -> MirrorMap:
"""
Supports:
NAME URL
URL → auto name = hostname
"""
path = os.path.join(repo_dir, filename)
mirrors: MirrorMap = {}
if not os.path.exists(path):
return mirrors
try:
with open(path, "r", encoding="utf-8") as fh:
for line in fh:
stripped = line.strip()
if not stripped or stripped.startswith("#"):
continue
parts = stripped.split(None, 1)
# Case 1: "name url"
if len(parts) == 2:
name, url = parts
# Case 2: "url" → auto-generate name
elif len(parts) == 1:
url = parts[0]
parsed = urlparse(url)
host = (parsed.netloc or "").split(":")[0]
base = host or "mirror"
name = base
i = 2
while name in mirrors:
name = f"{base}{i}"
i += 1
else:
continue
mirrors[name] = url
except OSError as exc:
print(f"[WARN] Could not read MIRRORS file at {path}: {exc}")
return mirrors
def write_mirrors_file(
repo_dir: str,
mirrors: Mapping[str, str],
filename: str = "MIRRORS",
preview: bool = False,
) -> None:
path = os.path.join(repo_dir, filename)
lines = [f"{name} {url}" for name, url in sorted(mirrors.items())]
content = "\n".join(lines) + ("\n" if lines else "")
if preview:
print(f"[PREVIEW] Would write MIRRORS file at {path}:")
print(content or "(empty)")
return
try:
os.makedirs(os.path.dirname(path), exist_ok=True)
with open(path, "w", encoding="utf-8") as fh:
fh.write(content)
print(f"[INFO] Wrote MIRRORS file at {path}")
except OSError as exc:
print(f"[ERROR] Failed to write MIRRORS file at {path}: {exc}")
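The line-handling rules in `read_mirrors_file` (blank and `#` lines skipped, `NAME URL` pairs, bare URLs named after their hostname with `host2`, `host3`, … de-duplication) can be exercised in isolation; `parse_mirror_lines` below is a hypothetical, filesystem-free sketch of the same parsing:

```python
from urllib.parse import urlparse

def parse_mirror_lines(lines):
    """Parse MIRRORS-style lines into a name -> url mapping (sketch)."""
    mirrors = {}
    for line in lines:
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # skip blanks and comments
        parts = stripped.split(None, 1)
        if len(parts) == 2:               # Case 1: "name url"
            name, url = parts
        else:                             # Case 2: bare "url" -> name from hostname
            url = parts[0]
            host = (urlparse(url).netloc or "").split(":")[0]
            base = host or "mirror"
            name, i = base, 2
            while name in mirrors:        # de-duplicate: host, host2, host3, ...
                name = f"{base}{i}"
                i += 1
        mirrors[name] = url
    return mirrors

print(parse_mirror_lines([
    "# primary",
    "origin git@github.com:acme/demo.git",
    "https://codeberg.org/acme/demo.git",
]))
```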

View File

@@ -0,0 +1,46 @@
from __future__ import annotations
from typing import List
from .context import build_context
from .printing import print_header, print_named_mirrors
from .types import Repository
def list_mirrors(
selected_repos: List[Repository],
repositories_base_dir: str,
all_repos: List[Repository],
source: str = "all",
) -> None:
"""
List mirrors for the selected repositories.
source:
- "config" → only mirrors from configuration
- "file" → only mirrors from MIRRORS file
- "resolved" → merged view (config + file, file wins)
- "all" → show config + file + resolved
"""
for repo in selected_repos:
ctx = build_context(repo, repositories_base_dir, all_repos)
resolved_m = ctx.resolved_mirrors
print_header("[MIRROR]", ctx)
if source in ("config", "all"):
print_named_mirrors("config mirrors", ctx.config_mirrors)
if source == "config":
print()
continue # next repo
if source in ("file", "all"):
print_named_mirrors("MIRRORS file", ctx.file_mirrors)
if source == "file":
print()
continue # next repo
if source in ("resolved", "all"):
print_named_mirrors("resolved mirrors", resolved_m)
print()

View File

@@ -0,0 +1,162 @@
from __future__ import annotations
import os
from typing import Dict, List, Tuple, Optional
import yaml
from pkgmgr.core.config.save import save_user_config
from .context import build_context
from .io import write_mirrors_file
from .types import MirrorMap, Repository
# -----------------------------------------------------------------------------
# Helpers
# -----------------------------------------------------------------------------
def _repo_key(repo: Repository) -> Tuple[str, str, str]:
"""
Normalised key for identifying a repository in config files.
"""
return (
str(repo.get("provider", "")),
str(repo.get("account", "")),
str(repo.get("repository", "")),
)
def _load_user_config(path: str) -> Dict[str, object]:
"""
Load a user config YAML file as dict.
Non-dicts yield {}.
"""
if not os.path.exists(path):
return {}
try:
with open(path, "r", encoding="utf-8") as f:
data = yaml.safe_load(f) or {}
return data if isinstance(data, dict) else {}
except Exception:
return {}
# -----------------------------------------------------------------------------
# Main merge command
# -----------------------------------------------------------------------------
def merge_mirrors(
selected_repos: List[Repository],
repositories_base_dir: str,
all_repos: List[Repository],
source: str,
target: str,
preview: bool = False,
user_config_path: Optional[str] = None,
) -> None:
"""
Merge mirrors between config and MIRRORS file.
Rules:
- source, target ∈ {"config", "file"}.
- merged = (target_mirrors overridden by source_mirrors)
- If target == "file" → write MIRRORS file.
- If target == "config":
* update the user config YAML directly
* write it using save_user_config()
The merge strategy is:
dst + src (src wins on same name)
"""
# Load user config once if we intend to write to it.
user_cfg: Optional[Dict[str, object]] = None
user_cfg_path_expanded: Optional[str] = None
if target == "config" and user_config_path and not preview:
user_cfg_path_expanded = os.path.expanduser(user_config_path)
user_cfg = _load_user_config(user_cfg_path_expanded)
if not isinstance(user_cfg.get("repositories"), list):
user_cfg["repositories"] = []
for repo in selected_repos:
ctx = build_context(repo, repositories_base_dir, all_repos)
print("============================================================")
print(f"[MIRROR MERGE] Repository: {ctx.identifier}")
print(f"[MIRROR MERGE] Directory: {ctx.repo_dir}")
print(f"[MIRROR MERGE] {source} → {target}")
print("============================================================")
# Pick the correct source/target maps
if source == "config":
src = ctx.config_mirrors
dst = ctx.file_mirrors
else: # source == "file"
src = ctx.file_mirrors
dst = ctx.config_mirrors
# Merge (src overrides dst)
merged: MirrorMap = dict(dst)
merged.update(src)
# ---------------------------------------------------------
# WRITE TO FILE
# ---------------------------------------------------------
if target == "file":
write_mirrors_file(ctx.repo_dir, merged, preview=preview)
print()
continue
# ---------------------------------------------------------
# WRITE TO CONFIG
# ---------------------------------------------------------
if target == "config":
# If preview or no config path → show intended output
if preview or not user_cfg:
print("[INFO] The following mirrors would be written to config:")
if not merged:
print(" (no mirrors)")
else:
for name, url in sorted(merged.items()):
print(f" - {name}: {url}")
print(" (Config not modified due to preview or missing path.)")
print()
continue
repos = user_cfg.get("repositories")
target_key = _repo_key(repo)
existing_repo: Optional[Repository] = None
# Find existing repo entry
for entry in repos:
if isinstance(entry, dict) and _repo_key(entry) == target_key:
existing_repo = entry
break
# Create entry if missing
if existing_repo is None:
existing_repo = {
"provider": repo.get("provider"),
"account": repo.get("account"),
"repository": repo.get("repository"),
}
repos.append(existing_repo)
# Write or delete mirrors
if merged:
existing_repo["mirrors"] = dict(merged)
else:
existing_repo.pop("mirrors", None)
print(" [OK] Updated repo['mirrors'] in user config.")
print()
# -------------------------------------------------------------
# SAVE CONFIG (once at the end)
# -------------------------------------------------------------
if user_cfg is not None and user_cfg_path_expanded is not None and not preview:
save_user_config(user_cfg, user_cfg_path_expanded)
print(f"[OK] Saved updated config: {user_cfg_path_expanded}")

View File

@@ -0,0 +1,35 @@
from __future__ import annotations
from .types import MirrorMap, RepoMirrorContext
def print_header(
title_prefix: str,
ctx: RepoMirrorContext,
) -> None:
"""
Print a standard header for mirror-related output.
title_prefix examples:
- "[MIRROR]"
- "[MIRROR DIFF]"
- "[MIRROR MERGE]"
- "[MIRROR SETUP:LOCAL]"
- "[MIRROR SETUP:REMOTE]"
"""
print("============================================================")
print(f"{title_prefix} Repository: {ctx.identifier}")
print(f"{title_prefix} Directory: {ctx.repo_dir}")
print("============================================================")
def print_named_mirrors(label: str, mirrors: MirrorMap) -> None:
"""
Print a labeled mirror block (e.g. '[config mirrors]').
"""
print(f" [{label}]")
if mirrors:
for name, url in sorted(mirrors.items()):
print(f" - {name}: {url}")
else:
print(" (none)")

View File

@@ -0,0 +1,165 @@
from __future__ import annotations
from typing import List, Tuple
from pkgmgr.core.git import run_git, GitError
from .context import build_context
from .git_remote import determine_primary_remote_url, ensure_origin_remote
from .types import Repository
def _setup_local_mirrors_for_repo(
repo: Repository,
repositories_base_dir: str,
all_repos: List[Repository],
preview: bool,
) -> None:
"""
Ensure local Git state is sane (currently: 'origin' remote).
"""
ctx = build_context(repo, repositories_base_dir, all_repos)
print("------------------------------------------------------------")
print(f"[MIRROR SETUP:LOCAL] {ctx.identifier}")
print(f"[MIRROR SETUP:LOCAL] dir: {ctx.repo_dir}")
print("------------------------------------------------------------")
ensure_origin_remote(repo, ctx, preview=preview)
print()
def _probe_mirror(url: str, repo_dir: str) -> Tuple[bool, str]:
"""
Probe a remote mirror by running `git ls-remote <url>`.
Returns:
(True, "") on success,
(False, error_message) on failure.
Important:
- Only the exit code is evaluated.
- STDERR may contain hints/warnings and is NOT automatically an error.
"""
try:
# We ignore stdout entirely; all that matters is that the command
# completes without a GitError (i.e. exit code 0).
run_git(["ls-remote", url], cwd=repo_dir)
return True, ""
except GitError as exc:
return False, str(exc)
def _setup_remote_mirrors_for_repo(
repo: Repository,
repositories_base_dir: str,
all_repos: List[Repository],
preview: bool,
) -> None:
"""
Remote-side setup / validation.
Currently only **non-destructive checks** are performed:
- For every mirror (from config + MIRRORS file, file wins):
  * `git ls-remote <url>` is executed.
  * Exit code 0 → [OK]
  * On failure → [WARN] + details from the GitError exception
**No** provider APIs are called and no repositories are created.
"""
ctx = build_context(repo, repositories_base_dir, all_repos)
resolved_m = ctx.resolved_mirrors
print("------------------------------------------------------------")
print(f"[MIRROR SETUP:REMOTE] {ctx.identifier}")
print(f"[MIRROR SETUP:REMOTE] dir: {ctx.repo_dir}")
print("------------------------------------------------------------")
if not resolved_m:
# Optional: fall back to a heuristically determined URL in case we
# ever want to implement "create automatically".
primary_url = determine_primary_remote_url(repo, resolved_m)
if not primary_url:
print(
"[INFO] No mirrors configured (config or MIRRORS file), and no "
"primary URL could be derived from provider/account/repository."
)
print()
return
ok, error_message = _probe_mirror(primary_url, ctx.repo_dir)
if ok:
print(f"[OK] Remote mirror (primary) is reachable: {primary_url}")
else:
print("[WARN] Primary remote URL is NOT reachable:")
print(f" {primary_url}")
if error_message:
print(" Details:")
for line in error_message.splitlines():
print(f" {line}")
print()
print(
"[INFO] Remote checks are non-destructive and only use `git ls-remote` "
"to probe mirror URLs."
)
print()
return
# Normal case: we have named mirrors from config/MIRRORS
for name, url in sorted(resolved_m.items()):
ok, error_message = _probe_mirror(url, ctx.repo_dir)
if ok:
print(f"[OK] Remote mirror '{name}' is reachable: {url}")
else:
print(f"[WARN] Remote mirror '{name}' is NOT reachable:")
print(f" {url}")
if error_message:
print(" Details:")
for line in error_message.splitlines():
print(f" {line}")
print()
print(
"[INFO] Remote checks are non-destructive and only use `git ls-remote` "
"to probe mirror URLs."
)
print()
def setup_mirrors(
selected_repos: List[Repository],
repositories_base_dir: str,
all_repos: List[Repository],
preview: bool = False,
local: bool = True,
remote: bool = True,
) -> None:
"""
Setup mirrors for the selected repositories.
local:
- Configure local Git remotes (currently: ensure 'origin' is present and
points to a reasonable URL).
remote:
- Non-destructive remote checks using `git ls-remote` for each mirror URL.
No repositories are created on the provider.
"""
for repo in selected_repos:
if local:
_setup_local_mirrors_for_repo(
repo,
repositories_base_dir=repositories_base_dir,
all_repos=all_repos,
preview=preview,
)
if remote:
_setup_remote_mirrors_for_repo(
repo,
repositories_base_dir=repositories_base_dir,
all_repos=all_repos,
preview=preview,
)

View File

@@ -0,0 +1,32 @@
from __future__ import annotations
from dataclasses import dataclass
from typing import Any, Dict
Repository = Dict[str, Any]
MirrorMap = Dict[str, str]
@dataclass(frozen=True)
class RepoMirrorContext:
"""
Bundle mirror-related information for a single repository.
"""
identifier: str
repo_dir: str
config_mirrors: MirrorMap
file_mirrors: MirrorMap
@property
def resolved_mirrors(self) -> MirrorMap:
"""
Combined mirrors from config and MIRRORS file.
Strategy:
- Start from config mirrors
- Overlay MIRRORS file (file wins on same name)
"""
merged: MirrorMap = dict(self.config_mirrors)
merged.update(self.file_mirrors)
return merged
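A minimal sketch of the overlay strategy, showing that a MIRRORS-file entry overrides a config entry of the same name (`resolve_mirrors` is an illustrative stand-in for the property above):

```python
def resolve_mirrors(config_mirrors, file_mirrors):
    """Overlay MIRRORS-file entries on top of config entries (file wins)."""
    merged = dict(config_mirrors)
    merged.update(file_mirrors)
    return merged

config = {"origin": "git@github.com:acme/demo.git",
          "backup": "git@a.example:demo.git"}
file_m = {"backup": "git@b.example:demo.git"}
# The file's 'backup' URL wins; 'origin' survives from config:
print(resolve_mirrors(config, file_m))
```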

View File

@@ -0,0 +1,218 @@
# Release Action
This module implements the `pkgmgr release` workflow.
It provides a controlled, reproducible release process that:
- bumps the project version
- updates all supported packaging formats
- creates and pushes Git tags
- optionally maintains a floating `latest` tag
- optionally closes the current branch
The implementation is intentionally explicit and conservative to avoid
accidental releases or broken Git states.
---
## What the Release Command Does
A release performs the following high-level steps:
1. Synchronize the current branch with its upstream (fast-forward only)
2. Determine the next semantic version
3. Update all versioned files
4. Commit the release
5. Create and push a version tag
6. Optionally update and push the floating `latest` tag
7. Optionally close the current branch
All steps support **preview (dry-run)** mode.
---
## Supported Files Updated During a Release
If present, the following files are updated automatically:
- `pyproject.toml`
- `CHANGELOG.md`
- `flake.nix`
- `PKGBUILD`
- `package-manager.spec`
- `debian/changelog`
Missing files are skipped gracefully.
---
## Git Safety Rules
The release workflow enforces strict Git safety guarantees:
- A `git pull --ff-only` is executed **before any file modifications**
- No merge commits are ever created automatically
- Only the current branch and the newly created version tag are pushed
- `git push --tags` is intentionally **not** used
- The floating `latest` tag is force-pushed only when required
---
## Semantic Versioning
The next version is calculated from existing Git tags:
- Tags must follow the format `vX.Y.Z`
- The release type controls the version bump:
- `patch`
- `minor`
- `major`
The new tag is always created as an **annotated tag**.
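A minimal sketch of the bump rules, assuming tags strictly match `vX.Y.Z` (`bump_version` is illustrative; the real logic lives in `versioning.py`):

```python
import re

def bump_version(tag: str, release_type: str) -> str:
    """Bump a vX.Y.Z tag according to the release type (sketch)."""
    m = re.fullmatch(r"v(\d+)\.(\d+)\.(\d+)", tag)
    if not m:
        raise ValueError(f"Tag {tag!r} does not match vX.Y.Z")
    major, minor, patch = map(int, m.groups())
    if release_type == "major":
        major, minor, patch = major + 1, 0, 0
    elif release_type == "minor":
        minor, patch = minor + 1, 0
    elif release_type == "patch":
        patch += 1
    else:
        raise ValueError(f"Unknown release type: {release_type}")
    return f"v{major}.{minor}.{patch}"

print(bump_version("v1.2.0", "patch"))  # v1.2.1
print(bump_version("v1.2.1", "minor"))  # v1.3.0
```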
---
## Floating `latest` Tag
The floating `latest` tag is handled explicitly:
- `latest` is updated **only if** the new version is the highest existing version
- Version comparison uses natural version sorting (`sort -V`)
- `latest` always points to the commit behind the version tag
- Updating `latest` uses a forced push by design
This guarantees that `latest` always represents the highest released version,
never an older release.
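For tags that strictly match `vX.Y.Z`, the `sort -V` comparison is equivalent to comparing integer tuples; this sketch of the `latest` gate (hypothetical `is_highest_version` helper) shows why plain lexicographic comparison would be wrong:

```python
def is_highest_version(new_tag, existing_tags):
    """Check whether new_tag is the highest vX.Y.Z tag (sketch of the `latest` gate)."""
    def key(tag):
        return tuple(int(p) for p in tag.lstrip("v").split("."))
    return all(key(new_tag) >= key(t) for t in existing_tags)

tags = ["v1.2.0", "v1.10.0", "v1.9.3"]
print(is_highest_version("v1.10.1", tags))  # True: numeric compare, not string compare
print(is_highest_version("v1.3.0", tags))   # False: v1.10.0 is higher
```

Note that string comparison would rank `v1.9.3` above `v1.10.0`, which is exactly the mistake natural version sorting avoids.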
---
## Preview Mode
Preview mode (`--preview`) performs a full dry-run:
- No files are modified
- No Git commands are executed
- All intended actions are printed
Example preview output includes:
- version bump
- file updates
- commit message
- tag creation
- branch and tag pushes
- `latest` update (if applicable)
---
## Interactive vs Forced Mode
### Interactive (default)
1. Run a preview
2. Ask for confirmation
3. Execute the real release
### Forced (`--force`)
- Skips preview and confirmation
- Skips branch deletion prompts
- Executes the release immediately
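The two modes can be sketched as a small driver; `do_release` and `confirm` are hypothetical callables standing in for the real preview/confirmation machinery in `workflow.py`:

```python
def run_release(do_release, confirm, force=False):
    """Interactive-by-default release driver (sketch).

    do_release(preview) performs either a dry-run or the real release;
    confirm() asks the user and returns True/False.
    """
    if force:
        do_release(preview=False)   # forced mode: no preview, no prompt
        return True
    do_release(preview=True)        # phase 1: dry-run
    if not confirm():               # phase 2: ask before doing it for real
        print("Aborted release.")
        return False
    do_release(preview=False)
    return True

log = []
run_release(lambda preview: log.append(preview), confirm=lambda: True)
print(log)  # [True, False]: preview first, then the real run
```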
---
## Branch Closing (`--close`)
When `--close` is enabled:
- `main` and `master` are **never** deleted
- Other branches:
- prompt for confirmation (`y/N`)
- can be skipped using `--force`
- Branch deletion happens **only after** a successful release
---
## Execution Flow (ASCII Diagram)
```
+---------------------+
| pkgmgr release |
+----------+----------+
|
v
+---------------------+
| Detect branch |
+----------+----------+
|
v
+------------------------------+
| git fetch / pull --ff-only |
+----------+-------------------+
|
v
+------------------------------+
| Determine next version |
+----------+-------------------+
|
v
+------------------------------+
| Update versioned files |
+----------+-------------------+
|
v
+------------------------------+
| Commit release |
+----------+-------------------+
|
v
+------------------------------+
| Create version tag (vX.Y.Z) |
+----------+-------------------+
|
v
+------------------------------+
| Push branch + version tag |
+----------+-------------------+
|
v
+---------------------------------------+
| Is this the highest version? |
+----------+----------------------------+
|
yes | no
|
v
+------------------------------+ +----------------------+
| Update & push `latest` tag | | Skip `latest` update |
+----------+-------------------+ +----------------------+
|
v
+------------------------------+
| Close branch (optional) |
+------------------------------+
```
---
## Design Goals
- Deterministic and reproducible releases
- No implicit Git side effects
- Explicit tag handling
- Safe defaults for interactive usage
- Automation-friendly forced mode
- Clear separation of concerns:
- `workflow.py` orchestration
- `git_ops.py` Git operations
- `prompts.py` user interaction
- `versioning.py` SemVer logic
---
## Summary
`pkgmgr release` is a **deliberately strict** release mechanism.
It trades convenience for safety, traceability, and correctness — making it
suitable for both interactive development workflows and fully automated CI/CD pipelines.

View File

@@ -1,310 +1,5 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Release helper for pkgmgr (public entry point).
This package provides the high-level `release()` function used by the
pkgmgr CLI to perform versioned releases:
- Determine the next semantic version based on existing Git tags.
- Update pyproject.toml with the new version.
- Update additional packaging files (flake.nix, PKGBUILD,
debian/changelog, RPM spec) where present.
- Prepend a basic entry to CHANGELOG.md.
- Move the floating 'latest' tag to the newly created release tag so
the newest release is always marked as latest.
Additional behaviour:
- If `preview=True` (from --preview), no files are written and no
Git commands are executed. Instead, a detailed summary of the
planned changes and commands is printed.
- If `preview=False` and not forced, the release is executed in two
phases:
1) Preview-only run (dry-run).
2) Interactive confirmation, then real release if confirmed.
This confirmation can be skipped with the `force=True` flag.
- Before creating and pushing tags, main/master is updated from origin
when the release is performed on one of these branches.
- If `close=True` is used and the current branch is not main/master,
the branch will be closed via branch_commands.close_branch() after
a successful release.
"""
from __future__ import annotations
import os
import sys
from typing import Optional
from pkgmgr.core.git import get_current_branch, GitError
from pkgmgr.actions.branch import close_branch
from .versioning import determine_current_version, bump_semver
from .git_ops import run_git_command, sync_branch_with_remote, update_latest_tag
from .files import (
update_pyproject_version,
update_flake_version,
update_pkgbuild_version,
update_spec_version,
update_changelog,
update_debian_changelog,
update_spec_changelog,
)
# ---------------------------------------------------------------------------
# Internal implementation (single-phase, preview or real)
# ---------------------------------------------------------------------------
def _release_impl(
pyproject_path: str = "pyproject.toml",
changelog_path: str = "CHANGELOG.md",
release_type: str = "patch",
message: Optional[str] = None,
preview: bool = False,
close: bool = False,
) -> None:
"""
Internal implementation that performs a single-phase release.
"""
current_ver = determine_current_version()
new_ver = bump_semver(current_ver, release_type)
new_ver_str = str(new_ver)
new_tag = new_ver.to_tag(with_prefix=True)
mode = "PREVIEW" if preview else "REAL"
print(f"Release mode: {mode}")
print(f"Current version: {current_ver}")
print(f"New version: {new_ver_str} ({release_type})")
repo_root = os.path.dirname(os.path.abspath(pyproject_path))
# Update core project metadata and packaging files
update_pyproject_version(pyproject_path, new_ver_str, preview=preview)
changelog_message = update_changelog(
changelog_path,
new_ver_str,
message=message,
preview=preview,
)
flake_path = os.path.join(repo_root, "flake.nix")
update_flake_version(flake_path, new_ver_str, preview=preview)
pkgbuild_path = os.path.join(repo_root, "PKGBUILD")
update_pkgbuild_version(pkgbuild_path, new_ver_str, preview=preview)
spec_path = os.path.join(repo_root, "package-manager.spec")
update_spec_version(spec_path, new_ver_str, preview=preview)
# Determine a single effective_message to be reused across all
# changelog targets (project, Debian, Fedora).
effective_message: Optional[str] = message
if effective_message is None and isinstance(changelog_message, str):
if changelog_message.strip():
effective_message = changelog_message.strip()
debian_changelog_path = os.path.join(repo_root, "debian", "changelog")
package_name = os.path.basename(repo_root) or "package-manager"
# Debian changelog
update_debian_changelog(
debian_changelog_path,
package_name=package_name,
new_version=new_ver_str,
message=effective_message,
preview=preview,
)
# Fedora / RPM %changelog
update_spec_changelog(
spec_path=spec_path,
package_name=package_name,
new_version=new_ver_str,
message=effective_message,
preview=preview,
)
commit_msg = f"Release version {new_ver_str}"
tag_msg = effective_message or commit_msg
# Determine branch and ensure it is up to date if main/master
try:
branch = get_current_branch() or "main"
except GitError:
branch = "main"
print(f"Releasing on branch: {branch}")
# Ensure main/master are up-to-date from origin before creating and
# pushing tags. For other branches we only log the intent.
sync_branch_with_remote(branch, preview=preview)
files_to_add = [
pyproject_path,
changelog_path,
flake_path,
pkgbuild_path,
spec_path,
debian_changelog_path,
]
existing_files = [p for p in files_to_add if p and os.path.exists(p)]
if preview:
for path in existing_files:
print(f"[PREVIEW] Would run: git add {path}")
print(f'[PREVIEW] Would run: git commit -am "{commit_msg}"')
print(f'[PREVIEW] Would run: git tag -a {new_tag} -m "{tag_msg}"')
print(f"[PREVIEW] Would run: git push origin {branch}")
print("[PREVIEW] Would run: git push origin --tags")
# Also update the floating 'latest' tag to the new highest SemVer.
update_latest_tag(new_tag, preview=True)
if close and branch not in ("main", "master"):
print(
f"[PREVIEW] Would also close branch {branch} after the release "
"(close=True and branch is not main/master)."
)
elif close:
print(
f"[PREVIEW] close=True but current branch is {branch}; "
"no branch would be closed."
)
print("Preview completed. No changes were made.")
return
for path in existing_files:
run_git_command(f"git add {path}")
run_git_command(f'git commit -am "{commit_msg}"')
run_git_command(f'git tag -a {new_tag} -m "{tag_msg}"')
run_git_command(f"git push origin {branch}")
run_git_command("git push origin --tags")
# Move 'latest' to the new release tag so the newest SemVer is always
# marked as latest. This is best-effort and must not break the release.
try:
update_latest_tag(new_tag, preview=False)
except GitError as exc: # pragma: no cover
print(
f"[WARN] Failed to update floating 'latest' tag for {new_tag}: {exc}\n"
"[WARN] The release itself completed successfully; only the "
"'latest' tag was not updated."
)
print(f"Release {new_ver_str} completed.")
if close:
if branch in ("main", "master"):
print(
f"[INFO] close=True but current branch is {branch}; "
"nothing to close."
)
return
print(
f"[INFO] Closing branch {branch} after successful release "
"(close=True and branch is not main/master)..."
)
try:
close_branch(name=branch, base_branch="main", cwd=".")
except Exception as exc: # pragma: no cover
print(f"[WARN] Failed to close branch {branch} automatically: {exc}")
# ---------------------------------------------------------------------------
# Public release entry point
# ---------------------------------------------------------------------------
def release(
pyproject_path: str = "pyproject.toml",
changelog_path: str = "CHANGELOG.md",
release_type: str = "patch",
message: Optional[str] = None,
preview: bool = False,
force: bool = False,
close: bool = False,
) -> None:
"""
High-level release entry point.
Modes:
- preview=True:
* Single-phase PREVIEW only.
- preview=False, force=True:
* Single-phase REAL release, no interactive preview.
- preview=False, force=False:
* Two-phase flow (intended default for interactive CLI use).
"""
if preview:
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=True,
close=close,
)
return
if force:
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=False,
close=close,
)
return
if not sys.stdin.isatty():
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=False,
close=close,
)
return
print("[INFO] Running preview before actual release...\n")
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=True,
close=close,
)
try:
answer = input("Proceed with the actual release? [y/N]: ").strip().lower()
except (EOFError, KeyboardInterrupt):
print("\n[INFO] Release aborted (no confirmation).")
return
if answer not in ("y", "yes"):
print("Release aborted by user. No changes were made.")
return
print("\n[INFO] Running REAL release...\n")
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=False,
close=close,
)
from .workflow import release
__all__ = ["release"]

View File

@@ -1,16 +1,3 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-"""
-Git-related helpers for the release workflow.
-Responsibilities:
-- Run Git (or shell) commands with basic error reporting.
-- Ensure main/master are synchronized with origin before tagging.
-- Maintain the floating 'latest' tag that always points to the newest
-  release tag.
-"""
 from __future__ import annotations
 import subprocess
@@ -19,77 +6,87 @@ from pkgmgr.core.git import GitError
 def run_git_command(cmd: str) -> None:
     """
     Run a Git (or shell) command with basic error reporting.
     The command is executed via the shell, primarily for readability
     when printed (as in 'git commit -am "msg"').
     """
     print(f"[GIT] {cmd}")
     try:
-        subprocess.run(cmd, shell=True, check=True)
+        subprocess.run(
+            cmd,
+            shell=True,
+            check=True,
+            text=True,
+            capture_output=True,
+        )
     except subprocess.CalledProcessError as exc:
         print(f"[ERROR] Git command failed: {cmd}")
         print(f"  Exit code: {exc.returncode}")
         if exc.stdout:
             print("--- stdout ---")
-            print(exc.stdout)
+            print("\n" + exc.stdout)
         if exc.stderr:
             print("--- stderr ---")
-            print(exc.stderr)
+            print("\n" + exc.stderr)
         raise GitError(f"Git command failed: {cmd}") from exc
-def sync_branch_with_remote(branch: str, preview: bool = False) -> None:
-    """
-    Ensure the local main/master branch is up-to-date before tagging.
-    Behaviour:
-    - For main/master: run 'git fetch origin' and 'git pull origin <branch>'.
-    - For all other branches: only log that no automatic sync is performed.
-    """
-    if branch not in ("main", "master"):
-        print(
-            f"[INFO] Skipping automatic git pull for non-main/master branch "
-            f"{branch}."
-        )
-        return
-    print(
-        f"[INFO] Updating branch {branch} from origin before creating tags..."
-    )
+def _capture(cmd: str) -> str:
+    res = subprocess.run(cmd, shell=True, check=False, capture_output=True, text=True)
+    return (res.stdout or "").strip()
+def ensure_clean_and_synced(preview: bool = False) -> None:
+    """
+    Always run a pull BEFORE modifying anything.
+    Uses --ff-only to avoid creating merge commits automatically.
+    If no upstream is configured, we skip.
+    """
+    upstream = _capture("git rev-parse --abbrev-ref --symbolic-full-name @{u} 2>/dev/null")
+    if not upstream:
+        print("[INFO] No upstream configured for current branch. Skipping pull.")
+        return
     if preview:
-        print("[PREVIEW] Would run: git fetch origin")
-        print(f"[PREVIEW] Would run: git pull origin {branch}")
+        print("[PREVIEW] Would run: git fetch origin --prune --tags --force")
+        print("[PREVIEW] Would run: git pull --ff-only")
         return
-    run_git_command("git fetch origin")
-    run_git_command(f"git pull origin {branch}")
+    print("[INFO] Syncing with remote before making any changes...")
+    run_git_command("git fetch origin --prune --tags --force")
+    run_git_command("git pull --ff-only")
+def is_highest_version_tag(tag: str) -> bool:
+    """
+    Return True if `tag` is the highest version among all tags matching v*.
+    Comparison uses `sort -V` for natural version ordering.
+    """
+    all_v = _capture("git tag --list 'v*'")
+    if not all_v:
+        return True  # No tags yet, so the current tag is the highest.
+    # Sort the new tag together with all existing v* tags; in natural
+    # version order the highest entry comes last. A plain string
+    # comparison would misorder e.g. v1.10.0 vs v1.9.0.
+    highest = _capture(f"printf '%s\\n' {tag} $(git tag --list 'v*') | sort -V | tail -n1")
+    print(f"[INFO] Highest tag: {highest}, Current tag: {tag}")
+    return highest == tag
 def update_latest_tag(new_tag: str, preview: bool = False) -> None:
     """
     Move the floating 'latest' tag to the newly created release tag.
-    Implementation details:
-    - We explicitly dereference the tag object via `<tag>^{}` so that
-      'latest' always points at the underlying commit, not at another tag.
-    - We create/update 'latest' as an annotated tag with a short message so
-      Git configurations that enforce annotated/signed tags do not fail
-      with "no tag message".
+    Notes:
+    - We dereference the tag object via `<tag>^{}` so that 'latest' points to the commit.
+    - 'latest' is forced (floating tag), therefore the push uses --force.
     """
     target_ref = f"{new_tag}^{{}}"
     print(f"[INFO] Updating 'latest' tag to point at {new_tag} (commit {target_ref})...")
     if preview:
-        print(f"[PREVIEW] Would run: git tag -f -a latest {target_ref} "
-              f'-m "Floating latest tag for {new_tag}"')
+        print(
+            f'[PREVIEW] Would run: git tag -f -a latest {target_ref} '
+            f'-m "Floating latest tag for {new_tag}"'
+        )
         print("[PREVIEW] Would run: git push origin latest --force")
         return
     run_git_command(
-        f'git tag -f -a latest {target_ref} '
-        f'-m "Floating latest tag for {new_tag}"'
+        f'git tag -f -a latest {target_ref} -m "Floating latest tag for {new_tag}"'
     )
     run_git_command("git push origin latest --force")

View File

@@ -0,0 +1,29 @@
from __future__ import annotations
import sys
def should_delete_branch(force: bool) -> bool:
"""
Ask whether the current branch should be deleted after a successful release.
- If force=True: skip prompt and return True.
- If non-interactive stdin: do NOT delete by default.
"""
if force:
return True
if not sys.stdin.isatty():
return False
answer = input("Delete the current branch after release? [y/N] ").strip().lower()
return answer in ("y", "yes")
def confirm_proceed_release() -> bool:
"""
Ask whether to proceed with the REAL release after the preview phase.
"""
try:
answer = input("Proceed with the actual release? [y/N]: ").strip().lower()
except (EOFError, KeyboardInterrupt):
return False
return answer in ("y", "yes")

View File

@@ -0,0 +1,230 @@
from __future__ import annotations
import os
import sys
from typing import Optional
from pkgmgr.actions.branch import close_branch
from pkgmgr.core.git import get_current_branch, GitError
from .files import (
update_changelog,
update_debian_changelog,
update_flake_version,
update_pkgbuild_version,
update_pyproject_version,
update_spec_changelog,
update_spec_version,
)
from .git_ops import (
ensure_clean_and_synced,
is_highest_version_tag,
run_git_command,
update_latest_tag,
)
from .prompts import confirm_proceed_release, should_delete_branch
from .versioning import bump_semver, determine_current_version
def _release_impl(
pyproject_path: str = "pyproject.toml",
changelog_path: str = "CHANGELOG.md",
release_type: str = "patch",
message: Optional[str] = None,
preview: bool = False,
close: bool = False,
force: bool = False,
) -> None:
# Determine current branch early
try:
branch = get_current_branch() or "main"
except GitError:
branch = "main"
print(f"Releasing on branch: {branch}")
# Pull BEFORE making any modifications
ensure_clean_and_synced(preview=preview)
current_ver = determine_current_version()
new_ver = bump_semver(current_ver, release_type)
new_ver_str = str(new_ver)
new_tag = new_ver.to_tag(with_prefix=True)
mode = "PREVIEW" if preview else "REAL"
print(f"Release mode: {mode}")
print(f"Current version: {current_ver}")
print(f"New version: {new_ver_str} ({release_type})")
repo_root = os.path.dirname(os.path.abspath(pyproject_path))
update_pyproject_version(pyproject_path, new_ver_str, preview=preview)
changelog_message = update_changelog(
changelog_path,
new_ver_str,
message=message,
preview=preview,
)
flake_path = os.path.join(repo_root, "flake.nix")
update_flake_version(flake_path, new_ver_str, preview=preview)
pkgbuild_path = os.path.join(repo_root, "PKGBUILD")
update_pkgbuild_version(pkgbuild_path, new_ver_str, preview=preview)
spec_path = os.path.join(repo_root, "package-manager.spec")
update_spec_version(spec_path, new_ver_str, preview=preview)
effective_message: Optional[str] = message
if effective_message is None and isinstance(changelog_message, str):
if changelog_message.strip():
effective_message = changelog_message.strip()
debian_changelog_path = os.path.join(repo_root, "debian", "changelog")
package_name = os.path.basename(repo_root) or "package-manager"
update_debian_changelog(
debian_changelog_path,
package_name=package_name,
new_version=new_ver_str,
message=effective_message,
preview=preview,
)
update_spec_changelog(
spec_path=spec_path,
package_name=package_name,
new_version=new_ver_str,
message=effective_message,
preview=preview,
)
commit_msg = f"Release version {new_ver_str}"
tag_msg = effective_message or commit_msg
files_to_add = [
pyproject_path,
changelog_path,
flake_path,
pkgbuild_path,
spec_path,
debian_changelog_path,
]
existing_files = [p for p in files_to_add if p and os.path.exists(p)]
if preview:
for path in existing_files:
print(f"[PREVIEW] Would run: git add {path}")
print(f'[PREVIEW] Would run: git commit -am "{commit_msg}"')
print(f'[PREVIEW] Would run: git tag -a {new_tag} -m "{tag_msg}"')
print(f"[PREVIEW] Would run: git push origin {branch}")
print(f"[PREVIEW] Would run: git push origin {new_tag}")
if is_highest_version_tag(new_tag):
update_latest_tag(new_tag, preview=True)
else:
print(f"[PREVIEW] Skipping 'latest' update (tag {new_tag} is not the highest).")
if close and branch not in ("main", "master"):
if force:
print(f"[PREVIEW] Would delete branch {branch} (forced).")
else:
print(f"[PREVIEW] Would ask whether to delete branch {branch} after release.")
return
for path in existing_files:
run_git_command(f"git add {path}")
run_git_command(f'git commit -am "{commit_msg}"')
run_git_command(f'git tag -a {new_tag} -m "{tag_msg}"')
# Push branch and ONLY the newly created version tag (no --tags)
run_git_command(f"git push origin {branch}")
run_git_command(f"git push origin {new_tag}")
# Update 'latest' only if this is the highest version tag
try:
if is_highest_version_tag(new_tag):
update_latest_tag(new_tag, preview=False)
else:
print(f"[INFO] Skipping 'latest' update (tag {new_tag} is not the highest).")
except GitError as exc:
print(f"[WARN] Failed to update floating 'latest' tag for {new_tag}: {exc}")
print("'latest' tag was not updated.")
print(f"Release {new_ver_str} completed.")
if close:
if branch in ("main", "master"):
print(f"[INFO] close=True but current branch is {branch}; skipping branch deletion.")
return
if not should_delete_branch(force=force):
print(f"[INFO] Branch deletion declined. Keeping branch {branch}.")
return
print(f"[INFO] Deleting branch {branch} after successful release...")
try:
close_branch(name=branch, base_branch="main", cwd=".")
except Exception as exc:
print(f"[WARN] Failed to close branch {branch} automatically: {exc}")
def release(
pyproject_path: str = "pyproject.toml",
changelog_path: str = "CHANGELOG.md",
release_type: str = "patch",
message: Optional[str] = None,
preview: bool = False,
force: bool = False,
close: bool = False,
) -> None:
if preview:
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=True,
close=close,
force=force,
)
return
# If force or non-interactive: no preview+confirmation step
if force or (not sys.stdin.isatty()):
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=False,
close=close,
force=force,
)
return
print("[INFO] Running preview before actual release...\n")
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=True,
close=close,
force=force,
)
    if not confirm_proceed_release():
        print("Release aborted. No changes were made.")
        return
print("\n[INFO] Running REAL release...\n")
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=False,
close=close,
force=force,
)

View File

@@ -18,52 +18,17 @@ USER_CONFIG_PATH = os.path.expanduser("~/.config/pkgmgr/config.yaml")
 DESCRIPTION_TEXT = """\
 \033[1;32mPackage Manager 🤖📦\033[0m
-\033[3mKevin's Package Manager is a multi-repository, multi-package, and multi-format
-development tool crafted by and designed for:\033[0m
-\033[1;34mKevin Veen-Birkenbach\033[0m
-\033[4mhttps://www.veen.world/\033[0m
+\033[3mKevin's multi-distro package and workflow manager.\033[0m
+\033[1;34mKevin Veen-Birkenbach\033[0m \033[4mhttps://www.veen.world/\033[0m
 \033[1mOverview:\033[0m
-A powerful toolchain that unifies and automates workflows across heterogeneous
-project ecosystems. pkgmgr is not only a package manager — it is a full
-developer-oriented orchestration tool.
-It automatically detects, merges, and processes metadata from multiple
-dependency formats, including:
-\033[1;33mPython:\033[0m pyproject.toml, requirements.txt
-\033[1;33mNix:\033[0m flake.nix
-\033[1;33mArch Linux:\033[0m PKGBUILD
-\033[1;33mAnsible:\033[0m requirements.yml
-This allows pkgmgr to perform installation, updates, verification, dependency
-resolution, and synchronization across complex multi-repo environments — with a
-single unified command-line interface.
-\033[1mDeveloper Tools:\033[0m
-pkgmgr includes an integrated toolbox to enhance daily development workflows:
-\033[1;33mVS Code integration:\033[0m Auto-generate and open multi-repo workspaces
-\033[1;33mTerminal integration:\033[0m Open repositories in new GNOME Terminal tabs
-\033[1;33mExplorer integration:\033[0m Open repositories in your file manager
-\033[1;33mRelease automation:\033[0m Version bumping, changelog updates, and tagging
-\033[1;33mBatch operations:\033[0m Execute shell commands across multiple repositories
-\033[1;33mGit/Docker/Make wrappers:\033[0m Unified command proxying for many tools
-\033[1mCapabilities:\033[0m
-• Clone, pull, verify, update, and manage many repositories at once
-• Resolve dependencies across languages and ecosystems
-• Standardize install/update workflows
-• Create symbolic executable wrappers for any project
-• Merge configuration from default + user config layers
-Use pkgmgr as both a robust package management framework and a versatile
-development orchestration tool.
-For detailed help on each command, use:
-\033[1mpkgmgr <command> --help\033[0m
+Built in \033[1;33mPython\033[0m on top of \033[1;33mNix flakes\033[0m to manage many
+repositories and packaging formats (pyproject.toml, flake.nix,
+PKGBUILD, debian, Ansible, …) with one CLI.
+For details on any command, run:
+\033[1mpkgmgr <command> --help\033[0m
 """
def main() -> None:
"""
Entry point for the pkgmgr CLI.

View File

@@ -6,6 +6,7 @@ from .version import handle_version
from .make import handle_make
from .changelog import handle_changelog
from .branch import handle_branch
from .mirror import handle_mirror_command
__all__ = [
"handle_repos_command",
@@ -16,4 +17,5 @@ __all__ = [
"handle_make",
"handle_changelog",
"handle_branch",
"handle_mirror_command",
]

View File

@@ -3,7 +3,7 @@ from __future__ import annotations
 import sys
 from pkgmgr.cli.context import CLIContext
-from pkgmgr.actions.branch import open_branch, close_branch
+from pkgmgr.actions.branch import open_branch, close_branch, drop_branch
def handle_branch(args, ctx: CLIContext) -> None:
@@ -12,7 +12,8 @@ def handle_branch(args, ctx: CLIContext) -> None:
     Currently supported:
     - pkgmgr branch open [<name>] [--base <branch>]
-    - pkgmgr branch close [<name>] [--base <branch>]
+    - pkgmgr branch close [<name>] [--base <branch>] [--force|-f]
+    - pkgmgr branch drop [<name>] [--base <branch>] [--force|-f]
if args.subcommand == "open":
open_branch(
@@ -27,6 +28,16 @@ def handle_branch(args, ctx: CLIContext) -> None:
name=getattr(args, "name", None),
base_branch=getattr(args, "base", "main"),
cwd=".",
force=getattr(args, "force", False),
)
return
if args.subcommand == "drop":
drop_branch(
name=getattr(args, "name", None),
base_branch=getattr(args, "base", "main"),
cwd=".",
force=getattr(args, "force", False),
)
return

View File

@@ -0,0 +1,118 @@
from __future__ import annotations
import sys
from typing import Any, Dict, List
from pkgmgr.actions.mirror import (
diff_mirrors,
list_mirrors,
merge_mirrors,
setup_mirrors,
)
from pkgmgr.cli.context import CLIContext
Repository = Dict[str, Any]
def handle_mirror_command(
args,
ctx: CLIContext,
selected: List[Repository],
) -> None:
"""
Entry point for 'pkgmgr mirror' subcommands.
Subcommands:
- mirror list → list configured mirrors
- mirror diff → compare config vs MIRRORS file
- mirror merge → merge mirrors between config and MIRRORS file
- mirror setup → configure local Git + remote placeholders
"""
if not selected:
print("[INFO] No repositories selected for 'mirror' command.")
sys.exit(1)
subcommand = getattr(args, "subcommand", None)
# ------------------------------------------------------------
# mirror list
# ------------------------------------------------------------
if subcommand == "list":
source = getattr(args, "source", "all")
list_mirrors(
selected_repos=selected,
repositories_base_dir=ctx.repositories_base_dir,
all_repos=ctx.all_repositories,
source=source,
)
return
# ------------------------------------------------------------
# mirror diff
# ------------------------------------------------------------
if subcommand == "diff":
diff_mirrors(
selected_repos=selected,
repositories_base_dir=ctx.repositories_base_dir,
all_repos=ctx.all_repositories,
)
return
# ------------------------------------------------------------
# mirror merge
# ------------------------------------------------------------
if subcommand == "merge":
source = getattr(args, "source", None)
target = getattr(args, "target", None)
preview = getattr(args, "preview", False)
if source == target:
print(
"[ERROR] For 'mirror merge', source and target "
"must differ (one of: config, file)."
)
sys.exit(2)
# Config file path can be passed explicitly via --config-path.
# If not given, fall back to the global context (if available).
explicit_config_path = getattr(args, "config_path", None)
user_config_path = explicit_config_path or getattr(
ctx, "user_config_path", None
)
merge_mirrors(
selected_repos=selected,
repositories_base_dir=ctx.repositories_base_dir,
all_repos=ctx.all_repositories,
source=source,
target=target,
preview=preview,
user_config_path=user_config_path,
)
return
# ------------------------------------------------------------
# mirror setup
# ------------------------------------------------------------
if subcommand == "setup":
local = getattr(args, "local", False)
remote = getattr(args, "remote", False)
preview = getattr(args, "preview", False)
# If neither flag is set → default to both.
if not local and not remote:
local = True
remote = True
setup_mirrors(
selected_repos=selected,
repositories_base_dir=ctx.repositories_base_dir,
all_repos=ctx.all_repositories,
preview=preview,
local=local,
remote=remote,
)
return
print(f"[ERROR] Unknown mirror subcommand: {subcommand}")
sys.exit(2)

View File

@@ -21,9 +21,9 @@ from pkgmgr.cli.commands import (
handle_make,
handle_changelog,
handle_branch,
handle_mirror_command,
)
def _has_explicit_selection(args) -> bool:
"""
Return True if the user explicitly selected repositories via
@@ -108,6 +108,7 @@ def dispatch_command(args, ctx: CLIContext) -> None:
"explore",
"terminal",
"code",
"mirror",
]
if getattr(args, "command", None) in commands_with_selection:
@@ -174,5 +175,9 @@ def dispatch_command(args, ctx: CLIContext) -> None:
handle_branch(args, ctx)
return
if args.command == "mirror":
handle_mirror_command(args, ctx, selected)
return
print(f"Unknown command: {args.command}")
sys.exit(2)

View File

@@ -1,505 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
from pkgmgr.cli.proxy import register_proxy_commands
class SortedSubParsersAction(argparse._SubParsersAction):
"""
Subparsers action that keeps choices sorted alphabetically.
"""
def add_parser(self, name, **kwargs):
parser = super().add_parser(name, **kwargs)
# Sort choices alphabetically by dest (subcommand name)
self._choices_actions.sort(key=lambda a: a.dest)
return parser
def add_identifier_arguments(subparser: argparse.ArgumentParser) -> None:
"""
Common identifier / selection arguments for many subcommands.
Selection modes (mutual intent, not hard-enforced):
- identifiers (positional): select by alias / provider/account/repo
- --all: select all repositories
- --category / --string / --tag: filter-based selection on top
of the full repository set
"""
subparser.add_argument(
"identifiers",
nargs="*",
help=(
"Identifier(s) for repositories. "
"Default: Repository of current folder."
),
)
subparser.add_argument(
"--all",
action="store_true",
default=False,
help=(
"Apply the subcommand to all repositories in the config. "
"Some subcommands ask for confirmation. If you want to give this "
"confirmation for all repositories, pipe 'yes'. E.g: "
"yes | pkgmgr {subcommand} --all"
),
)
subparser.add_argument(
"--category",
nargs="+",
default=[],
help=(
"Filter repositories by category patterns derived from config "
"filenames or repo metadata (use filename without .yml/.yaml, "
"or /regex/ to use a regular expression)."
),
)
subparser.add_argument(
"--string",
default="",
help=(
"Filter repositories whose identifier / name / path contains this "
"substring (case-insensitive). Use /regex/ for regular expressions."
),
)
subparser.add_argument(
"--tag",
action="append",
default=[],
help=(
"Filter repositories by tag. Matches tags from the repository "
"collector and category tags. Use /regex/ for regular expressions."
),
)
subparser.add_argument(
"--preview",
action="store_true",
help="Preview changes without executing commands",
)
subparser.add_argument(
"--list",
action="store_true",
help="List affected repositories (with preview or status)",
)
subparser.add_argument(
"-a",
"--args",
nargs=argparse.REMAINDER,
dest="extra_args",
help="Additional parameters to be attached.",
default=[],
)
def add_install_update_arguments(subparser: argparse.ArgumentParser) -> None:
"""
Common arguments for install/update commands.
"""
add_identifier_arguments(subparser)
subparser.add_argument(
"-q",
"--quiet",
action="store_true",
help="Suppress warnings and info messages",
)
subparser.add_argument(
"--no-verification",
action="store_true",
default=False,
help="Disable verification via commit/gpg",
)
subparser.add_argument(
"--dependencies",
action="store_true",
help="Also pull and update dependencies",
)
subparser.add_argument(
"--clone-mode",
choices=["ssh", "https", "shallow"],
default="ssh",
help=(
"Specify the clone mode: ssh, https, or shallow "
"(HTTPS shallow clone; default: ssh)"
),
)
def create_parser(description_text: str) -> argparse.ArgumentParser:
"""
Create the top-level argument parser for pkgmgr.
"""
parser = argparse.ArgumentParser(
description=description_text,
formatter_class=argparse.RawTextHelpFormatter,
)
subparsers = parser.add_subparsers(
dest="command",
help="Subcommands",
action=SortedSubParsersAction,
)
# ------------------------------------------------------------
# install / update / deinstall / delete
# ------------------------------------------------------------
install_parser = subparsers.add_parser(
"install",
help="Setup repository/repositories alias links to executables",
)
add_install_update_arguments(install_parser)
update_parser = subparsers.add_parser(
"update",
help="Update (pull + install) repository/repositories",
)
add_install_update_arguments(update_parser)
update_parser.add_argument(
"--system",
action="store_true",
help="Include system update commands",
)
deinstall_parser = subparsers.add_parser(
"deinstall",
help="Remove alias links to repository/repositories",
)
add_identifier_arguments(deinstall_parser)
delete_parser = subparsers.add_parser(
"delete",
help="Delete repository/repositories alias links to executables",
)
add_identifier_arguments(delete_parser)
# ------------------------------------------------------------
# create
# ------------------------------------------------------------
create_cmd_parser = subparsers.add_parser(
"create",
help=(
"Create new repository entries: add them to the config if not "
"already present, initialize the local repository, and push "
"remotely if --remote is set."
),
)
add_identifier_arguments(create_cmd_parser)
create_cmd_parser.add_argument(
"--remote",
action="store_true",
help="If set, add the remote and push the initial commit.",
)
# ------------------------------------------------------------
# status
# ------------------------------------------------------------
status_parser = subparsers.add_parser(
"status",
help="Show status for repository/repositories or system",
)
add_identifier_arguments(status_parser)
status_parser.add_argument(
"--system",
action="store_true",
help="Show system status",
)
# ------------------------------------------------------------
# config
# ------------------------------------------------------------
config_parser = subparsers.add_parser(
"config",
help="Manage configuration",
)
config_subparsers = config_parser.add_subparsers(
dest="subcommand",
help="Config subcommands",
required=True,
)
config_show = config_subparsers.add_parser(
"show",
help="Show configuration",
)
add_identifier_arguments(config_show)
config_subparsers.add_parser(
"add",
help="Interactively add a new repository entry",
)
config_subparsers.add_parser(
"edit",
help="Edit configuration file with nano",
)
config_subparsers.add_parser(
"init",
help="Initialize user configuration by scanning the base directory",
)
config_delete = config_subparsers.add_parser(
"delete",
help="Delete repository entry from user config",
)
add_identifier_arguments(config_delete)
config_ignore = config_subparsers.add_parser(
"ignore",
help="Set ignore flag for repository entries in user config",
)
add_identifier_arguments(config_ignore)
config_ignore.add_argument(
"--set",
choices=["true", "false"],
required=True,
help="Set ignore to true or false",
)
config_subparsers.add_parser(
"update",
help=(
"Update default config files in ~/.config/pkgmgr/ from the "
"installed pkgmgr package (does not touch config.yaml)."
),
)
# ------------------------------------------------------------
# path / explore / terminal / code / shell
# ------------------------------------------------------------
path_parser = subparsers.add_parser(
"path",
help="Print the path(s) of repository/repositories",
)
add_identifier_arguments(path_parser)
explore_parser = subparsers.add_parser(
"explore",
help="Open repository in Nautilus file manager",
)
add_identifier_arguments(explore_parser)
terminal_parser = subparsers.add_parser(
"terminal",
help="Open repository in a new GNOME Terminal tab",
)
add_identifier_arguments(terminal_parser)
code_parser = subparsers.add_parser(
"code",
help="Open repository workspace with VS Code",
)
add_identifier_arguments(code_parser)
shell_parser = subparsers.add_parser(
"shell",
help="Execute a shell command in each repository",
)
add_identifier_arguments(shell_parser)
shell_parser.add_argument(
"-c",
"--command",
nargs=argparse.REMAINDER,
dest="shell_command",
help=(
"The shell command (and its arguments) to execute in each "
"repository"
),
default=[],
)
# ------------------------------------------------------------
# branch
# ------------------------------------------------------------
branch_parser = subparsers.add_parser(
"branch",
help="Branch-related utilities (e.g. open/close feature branches)",
)
branch_subparsers = branch_parser.add_subparsers(
dest="subcommand",
help="Branch subcommands",
required=True,
)
branch_open = branch_subparsers.add_parser(
"open",
help="Create and push a new branch on top of a base branch",
)
branch_open.add_argument(
"name",
nargs="?",
help=(
"Name of the new branch (optional; will be asked interactively "
"if omitted)"
),
)
branch_open.add_argument(
"--base",
default="main",
help="Base branch to create the new branch from (default: main)",
)
branch_close = branch_subparsers.add_parser(
"close",
help="Merge a feature branch into base and delete it",
)
branch_close.add_argument(
"name",
nargs="?",
help=(
"Name of the branch to close (optional; current branch is used "
"if omitted)"
),
)
branch_close.add_argument(
"--base",
default="main",
help=(
"Base branch to merge into (default: main; falls back to master "
"internally if main does not exist)"
),
)
# ------------------------------------------------------------
# release
# ------------------------------------------------------------
release_parser = subparsers.add_parser(
"release",
help=(
"Create a release for repository/ies by incrementing version "
"and updating the changelog."
),
)
release_parser.add_argument(
"release_type",
choices=["major", "minor", "patch"],
help="Type of version increment for the release (major, minor, patch).",
)
release_parser.add_argument(
"-m",
"--message",
default=None,
help=(
"Optional release message to add to the changelog and tag."
),
)
# Generic selection / preview / list / extra_args
add_identifier_arguments(release_parser)
# Close current branch after successful release
release_parser.add_argument(
"--close",
action="store_true",
help=(
"Close the current branch after a successful release in each "
"repository, if it is not main/master."
),
)
# Force: skip preview+confirmation and run release directly
release_parser.add_argument(
"-f",
"--force",
action="store_true",
help=(
"Skip the interactive preview+confirmation step and run the "
"release directly."
),
)
# ------------------------------------------------------------
# version
# ------------------------------------------------------------
version_parser = subparsers.add_parser(
"version",
help=(
"Show version information for repository/ies "
"(git tags, pyproject.toml, flake.nix, PKGBUILD, debian, spec, "
"Ansible Galaxy)."
),
)
add_identifier_arguments(version_parser)
# ------------------------------------------------------------
# changelog
# ------------------------------------------------------------
changelog_parser = subparsers.add_parser(
"changelog",
help=(
"Show changelog derived from Git history. "
"By default, shows the changes between the last two SemVer tags."
),
)
changelog_parser.add_argument(
"range",
nargs="?",
default="",
help=(
"Optional tag or range (e.g. v1.2.3 or v1.2.0..v1.2.3). "
"If omitted, the changelog between the last two SemVer "
"tags is shown."
),
)
add_identifier_arguments(changelog_parser)
# ------------------------------------------------------------
# list
# ------------------------------------------------------------
list_parser = subparsers.add_parser(
"list",
help="List all repositories with details and status",
)
# Same selection logic as install/update/etc.:
add_identifier_arguments(list_parser)
list_parser.add_argument(
"--status",
type=str,
default="",
help=(
"Filter repositories by status (case insensitive). "
"Use /regex/ for regular expressions."
),
)
list_parser.add_argument(
"--description",
action="store_true",
help=(
"Show an additional detailed section per repository "
"(description, homepage, tags, categories, paths)."
),
)
# ------------------------------------------------------------
# make
# ------------------------------------------------------------
make_parser = subparsers.add_parser(
"make",
help="Executes make commands",
)
add_identifier_arguments(make_parser)
make_subparsers = make_parser.add_subparsers(
dest="subcommand",
help="Make subcommands",
required=True,
)
make_install = make_subparsers.add_parser(
"install",
help="Executes the make install command",
)
add_identifier_arguments(make_install)
make_deinstall = make_subparsers.add_parser(
"deinstall",
help="Executes the make deinstall command",
)
add_identifier_arguments(make_deinstall)
# ------------------------------------------------------------
# Proxy commands (git, docker, docker compose, ...)
# ------------------------------------------------------------
register_proxy_commands(subparsers)
return parser

View File

@@ -0,0 +1,68 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
from pkgmgr.cli.proxy import register_proxy_commands
from .common import SortedSubParsersAction
from .install_update import add_install_update_subparsers
from .config_cmd import add_config_subparsers
from .navigation_cmd import add_navigation_subparsers
from .branch_cmd import add_branch_subparsers
from .release_cmd import add_release_subparser
from .version_cmd import add_version_subparser
from .changelog_cmd import add_changelog_subparser
from .list_cmd import add_list_subparser
from .make_cmd import add_make_subparsers
from .mirror_cmd import add_mirror_subparsers
def create_parser(description_text: str) -> argparse.ArgumentParser:
"""
Create the top-level argument parser for pkgmgr.
"""
parser = argparse.ArgumentParser(
description=description_text,
formatter_class=argparse.RawTextHelpFormatter,
)
subparsers = parser.add_subparsers(
dest="command",
help="Subcommands",
action=SortedSubParsersAction,
)
# Core repo operations
add_install_update_subparsers(subparsers)
add_config_subparsers(subparsers)
# Navigation / tooling around repos
add_navigation_subparsers(subparsers)
# Branch & release workflow
add_branch_subparsers(subparsers)
add_release_subparser(subparsers)
# Info commands
add_version_subparser(subparsers)
add_changelog_subparser(subparsers)
add_list_subparser(subparsers)
# Make wrapper
add_make_subparsers(subparsers)
# Mirror management
add_mirror_subparsers(subparsers)
# Proxy commands (git, docker, docker compose, ...)
register_proxy_commands(subparsers)
return parser
__all__ = [
"create_parser",
"SortedSubParsersAction",
]
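The `create_parser` function above composes per-module registration helpers on top of a shared `SortedSubParsersAction`. A minimal self-contained sketch of that pattern (demo names, not the real pkgmgr modules), showing that subcommands appear alphabetically in the help body regardless of registration order:

```python
import argparse


class SortedSubParsersAction(argparse._SubParsersAction):
    """Keep subcommand help entries sorted alphabetically."""

    def add_parser(self, name, **kwargs):
        parser = super().add_parser(name, **kwargs)
        # Sort the pseudo-actions that argparse renders in the help body.
        self._choices_actions.sort(key=lambda a: a.dest)
        return parser


def add_zeta_subparser(subparsers):
    subparsers.add_parser("zeta", help="registered first")


def add_alpha_subparser(subparsers):
    subparsers.add_parser("alpha", help="registered second")


def create_parser(description_text):
    parser = argparse.ArgumentParser(description=description_text)
    subparsers = parser.add_subparsers(
        dest="command",
        action=SortedSubParsersAction,
    )
    add_zeta_subparser(subparsers)
    add_alpha_subparser(subparsers)
    return parser


help_text = create_parser("demo").format_help()
# "alpha" is listed before "zeta" in the help body, despite being registered later.
print(help_text.index("registered second") < help_text.index("registered first"))
```

Note that sorting `_choices_actions` only reorders the per-subcommand help lines; the `{zeta,alpha}` metavar in the usage line still reflects registration order.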

View File

@@ -0,0 +1,104 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
def add_branch_subparsers(
subparsers: argparse._SubParsersAction,
) -> None:
"""
Register branch command and its subcommands.
"""
branch_parser = subparsers.add_parser(
"branch",
help="Branch-related utilities (e.g. open/close/drop feature branches)",
)
branch_subparsers = branch_parser.add_subparsers(
dest="subcommand",
help="Branch subcommands",
required=True,
)
# -----------------------------------------------------------------------
# branch open
# -----------------------------------------------------------------------
branch_open = branch_subparsers.add_parser(
"open",
help="Create and push a new branch on top of a base branch",
)
branch_open.add_argument(
"name",
nargs="?",
help=(
"Name of the new branch (optional; will be asked interactively "
"if omitted)"
),
)
branch_open.add_argument(
"--base",
default="main",
help="Base branch to create the new branch from (default: main)",
)
# -----------------------------------------------------------------------
# branch close
# -----------------------------------------------------------------------
branch_close = branch_subparsers.add_parser(
"close",
help="Merge a feature branch into base and delete it",
)
branch_close.add_argument(
"name",
nargs="?",
help=(
"Name of the branch to close (optional; current branch is used "
"if omitted)"
),
)
branch_close.add_argument(
"--base",
default="main",
help=(
"Base branch to merge into (default: main; falls back to master "
"internally if main does not exist)"
),
)
branch_close.add_argument(
"-f",
"--force",
action="store_true",
help="Skip confirmation prompt and close the branch directly",
)
# -----------------------------------------------------------------------
# branch drop
# -----------------------------------------------------------------------
branch_drop = branch_subparsers.add_parser(
"drop",
help="Delete a branch locally and on origin (without merging)",
)
branch_drop.add_argument(
"name",
nargs="?",
help=(
"Name of the branch to drop (optional; current branch is used "
"if omitted)"
),
)
branch_drop.add_argument(
"--base",
default="main",
help=(
"Base branch used to protect main/master from deletion "
"(default: main; falls back to master internally)"
),
)
branch_drop.add_argument(
"-f",
"--force",
action="store_true",
help="Skip confirmation prompt and drop the branch directly",
)
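Because the branch subparsers are registered with `required=True` while the branch name uses `nargs="?"`, the name may be omitted but the subcommand itself may not. A small standalone sketch of that wiring (demo `prog` name, not wired into pkgmgr):

```python
import argparse

parser = argparse.ArgumentParser(prog="branch-demo")
subparsers = parser.add_subparsers(dest="subcommand", required=True)

close = subparsers.add_parser("close")
close.add_argument("name", nargs="?")          # optional branch name
close.add_argument("--base", default="main")   # default base branch
close.add_argument("-f", "--force", action="store_true")

# Name omitted: argparse fills in None and the helper resolves the branch.
args = parser.parse_args(["close", "--force"])
print(args.name, args.base, args.force)  # None main True
```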

View File

@@ -0,0 +1,34 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
from .common import add_identifier_arguments
def add_changelog_subparser(
subparsers: argparse._SubParsersAction,
) -> None:
"""
Register the changelog command.
"""
changelog_parser = subparsers.add_parser(
"changelog",
help=(
"Show changelog derived from Git history. "
"By default, shows the changes between the last two SemVer tags."
),
)
changelog_parser.add_argument(
"range",
nargs="?",
default="",
help=(
"Optional tag or range (e.g. v1.2.3 or v1.2.0..v1.2.3). "
"If omitted, the changelog between the last two SemVer "
"tags is shown."
),
)
add_identifier_arguments(changelog_parser)

View File

@@ -0,0 +1,127 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
class SortedSubParsersAction(argparse._SubParsersAction):
"""
Subparsers action that keeps choices sorted alphabetically.
"""
def add_parser(self, name, **kwargs):
parser = super().add_parser(name, **kwargs)
# Sort choices alphabetically by dest (subcommand name)
self._choices_actions.sort(key=lambda a: a.dest)
return parser
def add_identifier_arguments(subparser: argparse.ArgumentParser) -> None:
"""
Common identifier / selection arguments for many subcommands.
Selection modes (mutually exclusive by intent, not hard-enforced):
- identifiers (positional): select by alias / provider/account/repo
- --all: select all repositories
- --category / --string / --tag: filter-based selection on top
of the full repository set
"""
subparser.add_argument(
"identifiers",
nargs="*",
help=(
"Identifier(s) for repositories. "
"Default: Repository of current folder."
),
)
subparser.add_argument(
"--all",
action="store_true",
default=False,
help=(
"Apply the subcommand to all repositories in the config. "
"Some subcommands ask for confirmation. If you want to give this "
"confirmation for all repositories, pipe 'yes'. E.g: "
"yes | pkgmgr {subcommand} --all"
),
)
subparser.add_argument(
"--category",
nargs="+",
default=[],
help=(
"Filter repositories by category patterns derived from config "
"filenames or repo metadata (use filename without .yml/.yaml, "
"or /regex/ to use a regular expression)."
),
)
subparser.add_argument(
"--string",
default="",
help=(
"Filter repositories whose identifier / name / path contains this "
"substring (case-insensitive). Use /regex/ for regular expressions."
),
)
subparser.add_argument(
"--tag",
action="append",
default=[],
help=(
"Filter repositories by tag. Matches tags from the repository "
"collector and category tags. Use /regex/ for regular expressions."
),
)
subparser.add_argument(
"--preview",
action="store_true",
help="Preview changes without executing commands",
)
subparser.add_argument(
"--list",
action="store_true",
help="List affected repositories (with preview or status)",
)
subparser.add_argument(
"-a",
"--args",
nargs=argparse.REMAINDER,
dest="extra_args",
help="Additional parameters to be attached.",
default=[],
)
def add_install_update_arguments(subparser: argparse.ArgumentParser) -> None:
"""
Common arguments for install/update commands.
"""
add_identifier_arguments(subparser)
subparser.add_argument(
"-q",
"--quiet",
action="store_true",
help="Suppress warnings and info messages",
)
subparser.add_argument(
"--no-verification",
action="store_true",
default=False,
help="Disable verification via commit/gpg",
)
subparser.add_argument(
"--dependencies",
action="store_true",
help="Also pull and update dependencies",
)
subparser.add_argument(
"--clone-mode",
choices=["ssh", "https", "shallow"],
default="ssh",
help=(
"Specify the clone mode: ssh, https, or shallow "
"(HTTPS shallow clone; default: ssh)"
),
)
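The `-a/--args` option above uses `nargs=argparse.REMAINDER`, so everything after `-a` is collected verbatim into `extra_args`, even tokens that look like options. A minimal sketch of that behavior:

```python
import argparse

parser = argparse.ArgumentParser(prog="selector-demo")
parser.add_argument("identifiers", nargs="*")
parser.add_argument("--preview", action="store_true")
parser.add_argument(
    "-a",
    "--args",
    nargs=argparse.REMAINDER,
    dest="extra_args",
    default=[],
)

# Tokens after -a are passed through untouched, dashes included.
ns = parser.parse_args(["repo1", "--preview", "-a", "--hard", "-v"])
print(ns.identifiers, ns.preview, ns.extra_args)
# ['repo1'] True ['--hard', '-v']
```

This is why `-a` must come last on the command line: anything following it, including `--preview` or other pkgmgr flags, would be swallowed into `extra_args`.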

View File

@@ -0,0 +1,72 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
from .common import add_identifier_arguments
def add_config_subparsers(
subparsers: argparse._SubParsersAction,
) -> None:
"""
Register config command and its subcommands.
"""
config_parser = subparsers.add_parser(
"config",
help="Manage configuration",
)
config_subparsers = config_parser.add_subparsers(
dest="subcommand",
help="Config subcommands",
required=True,
)
config_show = config_subparsers.add_parser(
"show",
help="Show configuration",
)
add_identifier_arguments(config_show)
config_subparsers.add_parser(
"add",
help="Interactively add a new repository entry",
)
config_subparsers.add_parser(
"edit",
help="Edit configuration file with nano",
)
config_subparsers.add_parser(
"init",
help="Initialize user configuration by scanning the base directory",
)
config_delete = config_subparsers.add_parser(
"delete",
help="Delete repository entry from user config",
)
add_identifier_arguments(config_delete)
config_ignore = config_subparsers.add_parser(
"ignore",
help="Set ignore flag for repository entries in user config",
)
add_identifier_arguments(config_ignore)
config_ignore.add_argument(
"--set",
choices=["true", "false"],
required=True,
help="Set ignore to true or false",
)
config_subparsers.add_parser(
"update",
help=(
"Update default config files in ~/.config/pkgmgr/ from the "
"installed pkgmgr package (does not touch config.yaml)."
),
)

View File

@@ -0,0 +1,44 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
from .common import add_install_update_arguments, add_identifier_arguments
def add_install_update_subparsers(
subparsers: argparse._SubParsersAction,
) -> None:
"""
Register install / update / deinstall / delete commands.
"""
install_parser = subparsers.add_parser(
"install",
help="Setup repository/repositories alias links to executables",
)
add_install_update_arguments(install_parser)
update_parser = subparsers.add_parser(
"update",
help="Update (pull + install) repository/repositories",
)
add_install_update_arguments(update_parser)
update_parser.add_argument(
"--system",
action="store_true",
help="Include system update commands",
)
deinstall_parser = subparsers.add_parser(
"deinstall",
help="Remove alias links to repository/repositories",
)
add_identifier_arguments(deinstall_parser)
delete_parser = subparsers.add_parser(
"delete",
help="Delete repository/repositories alias links to executables",
)
add_identifier_arguments(delete_parser)

View File

@@ -0,0 +1,38 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
from .common import add_identifier_arguments
def add_list_subparser(
subparsers: argparse._SubParsersAction,
) -> None:
"""
Register the list command.
"""
list_parser = subparsers.add_parser(
"list",
help="List all repositories with details and status",
)
add_identifier_arguments(list_parser)
list_parser.add_argument(
"--status",
type=str,
default="",
help=(
"Filter repositories by status (case insensitive). "
"Use /regex/ for regular expressions."
),
)
list_parser.add_argument(
"--description",
action="store_true",
help=(
"Show an additional detailed section per repository "
"(description, homepage, tags, categories, paths)."
),
)

View File

@@ -0,0 +1,38 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
from .common import add_identifier_arguments
def add_make_subparsers(
subparsers: argparse._SubParsersAction,
) -> None:
"""
Register make command and its subcommands.
"""
make_parser = subparsers.add_parser(
"make",
help="Executes make commands",
)
add_identifier_arguments(make_parser)
make_subparsers = make_parser.add_subparsers(
dest="subcommand",
help="Make subcommands",
required=True,
)
make_install = make_subparsers.add_parser(
"install",
help="Executes the make install command",
)
add_identifier_arguments(make_install)
make_deinstall = make_subparsers.add_parser(
"deinstall",
help="Executes the make deinstall command",
)
add_identifier_arguments(make_deinstall)

View File

@@ -0,0 +1,110 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
from .common import add_identifier_arguments
def add_mirror_subparsers(
subparsers: argparse._SubParsersAction,
) -> None:
"""
Register mirror command and its subcommands (list, diff, merge, setup).
"""
mirror_parser = subparsers.add_parser(
"mirror",
help="Mirror-related utilities (list, diff, merge, setup)",
)
mirror_subparsers = mirror_parser.add_subparsers(
dest="subcommand",
help="Mirror subcommands",
required=True,
)
# ------------------------------------------------------------------
# mirror list
# ------------------------------------------------------------------
mirror_list = mirror_subparsers.add_parser(
"list",
help="List configured mirrors for repositories",
)
add_identifier_arguments(mirror_list)
mirror_list.add_argument(
"--source",
choices=["all", "config", "file", "resolved"],
default="all",
help="Which mirror source to show.",
)
# ------------------------------------------------------------------
# mirror diff
# ------------------------------------------------------------------
mirror_diff = mirror_subparsers.add_parser(
"diff",
help="Show differences between config mirrors and MIRRORS file",
)
add_identifier_arguments(mirror_diff)
# ------------------------------------------------------------------
# mirror merge {config,file} {config,file}
# ------------------------------------------------------------------
mirror_merge = mirror_subparsers.add_parser(
"merge",
help=(
"Merge mirrors between config and MIRRORS file "
"(example: pkgmgr mirror merge config file --all)"
),
)
# First define merge direction positionals, then selection args.
mirror_merge.add_argument(
"source",
choices=["config", "file"],
help="Source of mirrors.",
)
mirror_merge.add_argument(
"target",
choices=["config", "file"],
help="Target of mirrors.",
)
# Selection / filter / preview arguments
add_identifier_arguments(mirror_merge)
mirror_merge.add_argument(
"--config-path",
help=(
"Path to the user config file to update. "
"If omitted, the global config path is used."
),
)
# Note: --preview, --all, --category, --tag, --list, etc. are provided
# by add_identifier_arguments().
# ------------------------------------------------------------------
# mirror setup
# ------------------------------------------------------------------
mirror_setup = mirror_subparsers.add_parser(
"setup",
help=(
"Setup mirror configuration for repositories.\n"
" --local → configure local Git (remotes, pushurls)\n"
" --remote → create remote repositories if missing\n"
"Default: both local and remote."
),
)
add_identifier_arguments(mirror_setup)
mirror_setup.add_argument(
"--local",
action="store_true",
help="Only configure the local Git repository.",
)
mirror_setup.add_argument(
"--remote",
action="store_true",
help="Only operate on remote repositories.",
)
# Note: --preview also comes from add_identifier_arguments().
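Because `mirror merge` defines its two fixed-choice positionals before `add_identifier_arguments()` adds the variadic `identifiers`, argparse assigns the first two tokens to source/target and the rest to identifiers. A standalone sketch of that ordering (demo names only):

```python
import argparse

parser = argparse.ArgumentParser(prog="merge-demo")
# Fixed-count positionals must be defined before the variadic one.
parser.add_argument("source", choices=["config", "file"])
parser.add_argument("target", choices=["config", "file"])
parser.add_argument("identifiers", nargs="*")
parser.add_argument("--all", action="store_true")

ns = parser.parse_args(["config", "file", "repo1", "repo2", "--all"])
print(ns.source, ns.target, ns.identifiers)  # config file ['repo1', 'repo2']
```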

View File

@@ -0,0 +1,56 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
from .common import add_identifier_arguments
def add_navigation_subparsers(
subparsers: argparse._SubParsersAction,
) -> None:
"""
Register path / explore / terminal / code / shell commands.
"""
path_parser = subparsers.add_parser(
"path",
help="Print the path(s) of repository/repositories",
)
add_identifier_arguments(path_parser)
explore_parser = subparsers.add_parser(
"explore",
help="Open repository in Nautilus file manager",
)
add_identifier_arguments(explore_parser)
terminal_parser = subparsers.add_parser(
"terminal",
help="Open repository in a new GNOME Terminal tab",
)
add_identifier_arguments(terminal_parser)
code_parser = subparsers.add_parser(
"code",
help="Open repository workspace with VS Code",
)
add_identifier_arguments(code_parser)
shell_parser = subparsers.add_parser(
"shell",
help="Execute a shell command in each repository",
)
add_identifier_arguments(shell_parser)
shell_parser.add_argument(
"-c",
"--command",
nargs=argparse.REMAINDER,
dest="shell_command",
help=(
"The shell command (and its arguments) to execute in each "
"repository"
),
default=[],
)

View File

@@ -0,0 +1,57 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
from .common import add_identifier_arguments
def add_release_subparser(
subparsers: argparse._SubParsersAction,
) -> None:
"""
Register the release command.
"""
release_parser = subparsers.add_parser(
"release",
help=(
"Create a release for repository/ies by incrementing version "
"and updating the changelog."
),
)
release_parser.add_argument(
"release_type",
choices=["major", "minor", "patch"],
help="Type of version increment for the release (major, minor, patch).",
)
release_parser.add_argument(
"-m",
"--message",
default=None,
help=(
"Optional release message to add to the changelog and tag."
),
)
# Generic selection / preview / list / extra_args
add_identifier_arguments(release_parser)
# Close current branch after successful release
release_parser.add_argument(
"--close",
action="store_true",
help=(
"Close the current branch after a successful release in each "
"repository, if it is not main/master."
),
)
# Force: skip preview+confirmation and run release directly
release_parser.add_argument(
"-f",
"--force",
action="store_true",
help=(
"Skip the interactive preview+confirmation step and run the "
"release directly."
),
)

View File

@@ -0,0 +1,25 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
from .common import add_identifier_arguments
def add_version_subparser(
subparsers: argparse._SubParsersAction,
) -> None:
"""
Register the version command.
"""
version_parser = subparsers.add_parser(
"version",
help=(
"Show version information for repository/ies "
"(git tags, pyproject.toml, flake.nix, PKGBUILD, debian, spec, "
"Ansible Galaxy)."
),
)
add_identifier_arguments(version_parser)

View File

@@ -0,0 +1,80 @@
from __future__ import annotations
import io
import runpy
import sys
import unittest
from contextlib import redirect_stdout, redirect_stderr
def _run_pkgmgr_help(argv_tail: list[str]) -> str:
"""
Run `pkgmgr <argv_tail> --help` via the main module and return captured output.
argparse parses sys.argv[1:], so argv[0] must be a dummy program name.
Any SystemExit with code 0 or None is treated as success.
"""
original_argv = list(sys.argv)
buffer = io.StringIO()
cmd_repr = "pkgmgr " + " ".join(argv_tail) + " --help"
try:
# IMPORTANT: argv[0] must be a dummy program name
sys.argv = ["pkgmgr"] + list(argv_tail) + ["--help"]
try:
with redirect_stdout(buffer), redirect_stderr(buffer):
runpy.run_module("main", run_name="__main__")
except SystemExit as exc:
if exc.code not in (0, None):
raise AssertionError(
f"{cmd_repr!r} failed with exit code {exc.code}."
) from exc
return buffer.getvalue()
finally:
sys.argv = original_argv
class TestBranchHelpE2E(unittest.TestCase):
"""
End-to-end tests ensuring that `pkgmgr branch` help commands
run without error and print usage information.
"""
def test_branch_root_help(self) -> None:
"""
`pkgmgr branch --help` should run without error.
"""
output = _run_pkgmgr_help(["branch"])
self.assertIn("usage:", output)
self.assertIn("pkgmgr branch", output)
def test_branch_open_help(self) -> None:
"""
`pkgmgr branch open --help` should run without error.
"""
output = _run_pkgmgr_help(["branch", "open"])
self.assertIn("usage:", output)
self.assertIn("branch open", output)
def test_branch_close_help(self) -> None:
"""
`pkgmgr branch close --help` should run without error.
"""
output = _run_pkgmgr_help(["branch", "close"])
self.assertIn("usage:", output)
self.assertIn("branch close", output)
def test_branch_drop_help(self) -> None:
"""
`pkgmgr branch drop --help` should run without error.
"""
output = _run_pkgmgr_help(["branch", "drop"])
self.assertIn("usage:", output)
self.assertIn("branch drop", output)
if __name__ == "__main__":
unittest.main()
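The helper above treats `SystemExit(0)` (or `None`) as success because argparse prints the help text and then exits. A self-contained illustration of that contract, independent of pkgmgr:

```python
import argparse
import io
from contextlib import redirect_stdout

parser = argparse.ArgumentParser(prog="pkgmgr-demo")
buffer = io.StringIO()

exit_code = None
try:
    with redirect_stdout(buffer):
        # argparse prints usage/help to stdout, then raises SystemExit(0).
        parser.parse_args(["--help"])
except SystemExit as exc:
    exit_code = exc.code

output = buffer.getvalue()
print(exit_code, output.startswith("usage:"))  # 0 True
```

Parse errors behave differently: argparse writes to stderr and exits with code 2, which is why the helper also redirects stderr and asserts on the exit code.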

View File

@@ -0,0 +1,143 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
E2E integration tests for the `pkgmgr mirror` command family.
This test class covers:
- pkgmgr mirror --help
- pkgmgr mirror list --preview --all
- pkgmgr mirror diff --preview --all
- pkgmgr mirror merge config file --preview --all
- pkgmgr mirror setup --preview --all
All of these subcommands are fully wired at CLI level and do not
require mocks. With --preview, merge and setup do not perform
destructive actions, making them safe for CI execution.
"""
from __future__ import annotations
import io
import runpy
import sys
import unittest
from contextlib import redirect_stdout, redirect_stderr
class TestIntegrationMirrorCommands(unittest.TestCase):
"""
E2E tests for `pkgmgr mirror` commands.
"""
# ------------------------------------------------------------
# Helper
# ------------------------------------------------------------
def _run_pkgmgr(self, args: list[str]) -> str:
"""
Execute pkgmgr with the given arguments and return captured stdout+stderr.
- Treat SystemExit(0) or SystemExit(None) as success.
- Convert non-zero exit codes into AssertionError.
"""
original_argv = list(sys.argv)
buffer = io.StringIO()
cmd_repr = "pkgmgr " + " ".join(args)
try:
sys.argv = ["pkgmgr"] + args
try:
with redirect_stdout(buffer), redirect_stderr(buffer):
runpy.run_module("main", run_name="__main__")
except SystemExit as exc:
if exc.code not in (0, None):
raise AssertionError(
f"{cmd_repr!r} failed with exit code {exc.code}. "
"Scroll up to inspect the pkgmgr output."
) from exc
return buffer.getvalue()
finally:
sys.argv = original_argv
# ------------------------------------------------------------
# Tests
# ------------------------------------------------------------
def test_mirror_help(self) -> None:
"""
Ensure `pkgmgr mirror --help` runs successfully
and prints a usage message for the mirror command.
"""
output = self._run_pkgmgr(["mirror", "--help"])
self.assertIn("usage:", output)
self.assertIn("pkgmgr mirror", output)
def test_mirror_list_preview_all(self) -> None:
"""
`pkgmgr mirror list --preview --all` should run without error
and produce some output for the selected repositories.
"""
output = self._run_pkgmgr(["mirror", "list", "--preview", "--all"])
# Do not assert specific wording; just ensure something was printed.
self.assertTrue(
output.strip(),
msg="Expected `pkgmgr mirror list --preview --all` to produce output.",
)
def test_mirror_diff_preview_all(self) -> None:
"""
`pkgmgr mirror diff --preview --all` should run without error
and produce some diagnostic output (diff header, etc.).
"""
output = self._run_pkgmgr(["mirror", "diff", "--preview", "--all"])
self.assertTrue(
output.strip(),
msg="Expected `pkgmgr mirror diff --preview --all` to produce output.",
)
def test_mirror_merge_config_to_file_preview_all(self) -> None:
"""
`pkgmgr mirror merge config file --preview --all` should run without error.
In preview mode this does not change either config or MIRRORS files;
it only prints what would be merged.
"""
output = self._run_pkgmgr(
[
"mirror",
"merge",
"config",
"file",
"--preview",
"--all",
]
)
self.assertTrue(
output.strip(),
msg=(
"Expected `pkgmgr mirror merge config file --preview --all` "
"to produce output."
),
)
def test_mirror_setup_preview_all(self) -> None:
"""
`pkgmgr mirror setup --preview --all` should run without error.
In preview mode only the intended Git operations and remote
suggestions are printed; no real changes are made.
"""
output = self._run_pkgmgr(["mirror", "setup", "--preview", "--all"])
self.assertTrue(
output.strip(),
msg="Expected `pkgmgr mirror setup --preview --all` to produce output.",
)
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,248 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Integration tests for the `pkgmgr branch` CLI wiring.
These tests verify that:
- The argument parser creates the correct structure for
`branch open`, `branch close` and `branch drop`.
- `handle_branch` calls the corresponding helper functions
with the expected arguments (including base branch, cwd and force).
"""
from __future__ import annotations
import unittest
from unittest.mock import patch
from pkgmgr.cli.parser import create_parser
from pkgmgr.cli.commands.branch import handle_branch
class TestBranchCLI(unittest.TestCase):
"""
Tests for the branch subcommands implemented in the CLI.
"""
def _create_parser(self):
"""
Create the top-level parser with a minimal description.
"""
return create_parser("pkgmgr test parser")
# --------------------------------------------------------------------- #
# branch open
# --------------------------------------------------------------------- #
@patch("pkgmgr.cli.commands.branch.open_branch")
def test_branch_open_with_name_and_base(self, mock_open_branch):
"""
Ensure that `pkgmgr branch open <name> --base <branch>` calls
open_branch() with the correct parameters.
"""
parser = self._create_parser()
args = parser.parse_args(
["branch", "open", "feature/test-branch", "--base", "develop"]
)
# Sanity check: parser wiring
self.assertEqual(args.command, "branch")
self.assertEqual(args.subcommand, "open")
self.assertEqual(args.name, "feature/test-branch")
self.assertEqual(args.base, "develop")
# ctx is currently unused by handle_branch, so we can pass None
handle_branch(args, ctx=None)
mock_open_branch.assert_called_once()
_args, kwargs = mock_open_branch.call_args
self.assertEqual(kwargs.get("name"), "feature/test-branch")
self.assertEqual(kwargs.get("base_branch"), "develop")
self.assertEqual(kwargs.get("cwd"), ".")
@patch("pkgmgr.cli.commands.branch.open_branch")
def test_branch_open_with_name_and_default_base(self, mock_open_branch):
"""
Ensure that `pkgmgr branch open <name>` without --base uses
the default base branch 'main'.
"""
parser = self._create_parser()
args = parser.parse_args(["branch", "open", "feature/default-base"])
self.assertEqual(args.command, "branch")
self.assertEqual(args.subcommand, "open")
self.assertEqual(args.name, "feature/default-base")
self.assertEqual(args.base, "main")
handle_branch(args, ctx=None)
mock_open_branch.assert_called_once()
_args, kwargs = mock_open_branch.call_args
self.assertEqual(kwargs.get("name"), "feature/default-base")
self.assertEqual(kwargs.get("base_branch"), "main")
self.assertEqual(kwargs.get("cwd"), ".")
# --------------------------------------------------------------------- #
# branch close
# --------------------------------------------------------------------- #
@patch("pkgmgr.cli.commands.branch.close_branch")
def test_branch_close_with_name_and_base(self, mock_close_branch):
"""
Ensure that `pkgmgr branch close <name> --base <branch>` calls
close_branch() with the correct parameters and force=False by default.
"""
parser = self._create_parser()
args = parser.parse_args(
["branch", "close", "feature/old-branch", "--base", "main"]
)
# Sanity check: parser wiring
self.assertEqual(args.command, "branch")
self.assertEqual(args.subcommand, "close")
self.assertEqual(args.name, "feature/old-branch")
self.assertEqual(args.base, "main")
self.assertFalse(args.force)
handle_branch(args, ctx=None)
mock_close_branch.assert_called_once()
_args, kwargs = mock_close_branch.call_args
self.assertEqual(kwargs.get("name"), "feature/old-branch")
self.assertEqual(kwargs.get("base_branch"), "main")
self.assertEqual(kwargs.get("cwd"), ".")
self.assertFalse(kwargs.get("force"))
@patch("pkgmgr.cli.commands.branch.close_branch")
def test_branch_close_without_name_uses_none(self, mock_close_branch):
"""
Ensure that `pkgmgr branch close` without a name passes name=None
into close_branch(), leaving branch resolution to the helper.
"""
parser = self._create_parser()
args = parser.parse_args(["branch", "close"])
# Parser wiring: no name → None
self.assertEqual(args.command, "branch")
self.assertEqual(args.subcommand, "close")
self.assertIsNone(args.name)
self.assertEqual(args.base, "main")
self.assertFalse(args.force)
handle_branch(args, ctx=None)
mock_close_branch.assert_called_once()
_args, kwargs = mock_close_branch.call_args
self.assertIsNone(kwargs.get("name"))
self.assertEqual(kwargs.get("base_branch"), "main")
self.assertEqual(kwargs.get("cwd"), ".")
self.assertFalse(kwargs.get("force"))
@patch("pkgmgr.cli.commands.branch.close_branch")
def test_branch_close_with_force(self, mock_close_branch):
"""
Ensure that `pkgmgr branch close <name> --force` passes force=True.
"""
parser = self._create_parser()
args = parser.parse_args(
["branch", "close", "feature/old-branch", "--base", "main", "--force"]
)
self.assertTrue(args.force)
handle_branch(args, ctx=None)
mock_close_branch.assert_called_once()
_args, kwargs = mock_close_branch.call_args
self.assertEqual(kwargs.get("name"), "feature/old-branch")
self.assertEqual(kwargs.get("base_branch"), "main")
self.assertEqual(kwargs.get("cwd"), ".")
self.assertTrue(kwargs.get("force"))
# --------------------------------------------------------------------- #
# branch drop
# --------------------------------------------------------------------- #
@patch("pkgmgr.cli.commands.branch.drop_branch")
def test_branch_drop_with_name_and_base(self, mock_drop_branch):
"""
Ensure that `pkgmgr branch drop <name> --base <branch>` calls
drop_branch() with the correct parameters and force=False by default.
"""
parser = self._create_parser()
args = parser.parse_args(
["branch", "drop", "feature/tmp-branch", "--base", "develop"]
)
self.assertEqual(args.command, "branch")
self.assertEqual(args.subcommand, "drop")
self.assertEqual(args.name, "feature/tmp-branch")
self.assertEqual(args.base, "develop")
self.assertFalse(args.force)
handle_branch(args, ctx=None)
mock_drop_branch.assert_called_once()
_args, kwargs = mock_drop_branch.call_args
self.assertEqual(kwargs.get("name"), "feature/tmp-branch")
self.assertEqual(kwargs.get("base_branch"), "develop")
self.assertEqual(kwargs.get("cwd"), ".")
self.assertFalse(kwargs.get("force"))
@patch("pkgmgr.cli.commands.branch.drop_branch")
def test_branch_drop_without_name(self, mock_drop_branch):
"""
Ensure that `pkgmgr branch drop` without a name passes name=None
into drop_branch(), leaving branch resolution to the helper.
"""
parser = self._create_parser()
args = parser.parse_args(["branch", "drop"])
self.assertEqual(args.command, "branch")
self.assertEqual(args.subcommand, "drop")
self.assertIsNone(args.name)
self.assertEqual(args.base, "main")
self.assertFalse(args.force)
handle_branch(args, ctx=None)
mock_drop_branch.assert_called_once()
_args, kwargs = mock_drop_branch.call_args
self.assertIsNone(kwargs.get("name"))
self.assertEqual(kwargs.get("base_branch"), "main")
self.assertEqual(kwargs.get("cwd"), ".")
self.assertFalse(kwargs.get("force"))
@patch("pkgmgr.cli.commands.branch.drop_branch")
def test_branch_drop_with_force(self, mock_drop_branch):
"""
Ensure that `pkgmgr branch drop <name> --force` passes force=True.
"""
parser = self._create_parser()
args = parser.parse_args(
["branch", "drop", "feature/tmp-branch", "--force"]
)
self.assertTrue(args.force)
handle_branch(args, ctx=None)
mock_drop_branch.assert_called_once()
_args, kwargs = mock_drop_branch.call_args
self.assertEqual(kwargs.get("name"), "feature/tmp-branch")
self.assertEqual(kwargs.get("base_branch"), "main")
self.assertEqual(kwargs.get("cwd"), ".")
self.assertTrue(kwargs.get("force"))
if __name__ == "__main__":
unittest.main()


@@ -0,0 +1,33 @@
import unittest
from unittest.mock import patch, MagicMock
from pkgmgr.actions.branch.utils import _resolve_base_branch
from pkgmgr.core.git import GitError
class TestResolveBaseBranch(unittest.TestCase):
@patch("pkgmgr.actions.branch.utils.run_git")
def test_resolves_preferred(self, run_git):
run_git.return_value = None
result = _resolve_base_branch("main", "master", cwd=".")
self.assertEqual(result, "main")
run_git.assert_called_with(["rev-parse", "--verify", "main"], cwd=".")
@patch("pkgmgr.actions.branch.utils.run_git")
def test_resolves_fallback(self, run_git):
run_git.side_effect = [
GitError("main missing"),
None,
]
result = _resolve_base_branch("main", "master", cwd=".")
self.assertEqual(result, "master")
@patch("pkgmgr.actions.branch.utils.run_git")
def test_raises_when_no_branch_exists(self, run_git):
run_git.side_effect = GitError("missing")
with self.assertRaises(RuntimeError):
_resolve_base_branch("main", "master", cwd=".")
if __name__ == "__main__":
unittest.main()
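The three tests above pin down the helper's contract: verify the preferred branch, fall back to the alternative on `GitError`, and raise when neither ref exists. A minimal sketch of that contract, with stand-ins for `run_git` and `GitError` (the real ones live in `pkgmgr.core.git`, so names and signatures here are assumptions):

```python
class GitError(Exception):
    """Stand-in for pkgmgr.core.git.GitError."""

def resolve_base_branch(preferred, fallback, cwd, run_git):
    # Try `git rev-parse --verify` on each candidate in order;
    # run_git is expected to raise GitError for unknown refs.
    for candidate in (preferred, fallback):
        try:
            run_git(["rev-parse", "--verify", candidate], cwd=cwd)
            return candidate
        except GitError:
            continue
    raise RuntimeError(f"neither {preferred!r} nor {fallback!r} exists")

def fake_run_git(args, cwd="."):
    # Pretend only 'master' exists in this repository.
    if args[-1] != "master":
        raise GitError(f"unknown ref: {args[-1]}")

print(resolve_base_branch("main", "master", ".", fake_run_git))  # → master
```

The injected `run_git` parameter exists only to keep the sketch self-contained; the production helper calls the module-level `run_git` that the tests patch.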


@@ -0,0 +1,55 @@
import unittest
from unittest.mock import patch, MagicMock
from pkgmgr.actions.branch.close_branch import close_branch
from pkgmgr.core.git import GitError
class TestCloseBranch(unittest.TestCase):
@patch("pkgmgr.actions.branch.close_branch.input", return_value="y")
@patch("pkgmgr.actions.branch.close_branch.get_current_branch", return_value="feature-x")
@patch("pkgmgr.actions.branch.close_branch._resolve_base_branch", return_value="main")
@patch("pkgmgr.actions.branch.close_branch.run_git")
def test_close_branch_happy_path(self, run_git, resolve, current, input_mock):
close_branch(None, cwd=".")
expected = [
(["fetch", "origin"],),
(["checkout", "main"],),
(["pull", "origin", "main"],),
(["merge", "--no-ff", "feature-x"],),
(["push", "origin", "main"],),
(["branch", "-d", "feature-x"],),
(["push", "origin", "--delete", "feature-x"],),
]
actual = [call.args for call in run_git.call_args_list]
self.assertEqual(actual, expected)
@patch("pkgmgr.actions.branch.close_branch.get_current_branch", return_value="main")
@patch("pkgmgr.actions.branch.close_branch._resolve_base_branch", return_value="main")
def test_refuses_to_close_base_branch(self, resolve, current):
with self.assertRaises(RuntimeError):
close_branch(None)
@patch("pkgmgr.actions.branch.close_branch.input", return_value="n")
@patch("pkgmgr.actions.branch.close_branch.get_current_branch", return_value="feature-x")
@patch("pkgmgr.actions.branch.close_branch._resolve_base_branch", return_value="main")
@patch("pkgmgr.actions.branch.close_branch.run_git")
def test_close_branch_aborts_on_no(self, run_git, resolve, current, input_mock):
close_branch(None, cwd=".")
run_git.assert_not_called()
@patch("pkgmgr.actions.branch.close_branch.get_current_branch", return_value="feature-x")
@patch("pkgmgr.actions.branch.close_branch._resolve_base_branch", return_value="main")
@patch("pkgmgr.actions.branch.close_branch.run_git")
def test_close_branch_force_skips_prompt(self, run_git, resolve, current):
close_branch(None, cwd=".", force=True)
self.assertGreater(len(run_git.call_args_list), 0)
@patch("pkgmgr.actions.branch.close_branch.get_current_branch", side_effect=GitError("fail"))
def test_close_branch_errors_if_cannot_detect_branch(self, current):
with self.assertRaises(RuntimeError):
close_branch(None)
if __name__ == "__main__":
unittest.main()
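The happy-path expectations read as a recipe: fetch, switch to the base branch, fast-forward it, merge the feature branch without fast-forward, push, then delete the branch locally and remotely. A hedged sketch of that sequence as pure data (the real `close_branch` also resolves the base branch and prompts for confirmation, which this sketch omits):

```python
def close_branch_commands(branch: str, base: str) -> list[list[str]]:
    # The git argument lists the happy-path test asserts, in order.
    return [
        ["fetch", "origin"],
        ["checkout", base],
        ["pull", "origin", base],
        ["merge", "--no-ff", branch],
        ["push", "origin", base],
        ["branch", "-d", branch],
        ["push", "origin", "--delete", branch],
    ]

for cmd in close_branch_commands("feature-x", "main"):
    print("git " + " ".join(cmd))
```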


@@ -0,0 +1,50 @@
import unittest
from unittest.mock import patch, MagicMock
from pkgmgr.actions.branch.drop_branch import drop_branch
from pkgmgr.core.git import GitError
class TestDropBranch(unittest.TestCase):
@patch("pkgmgr.actions.branch.drop_branch.input", return_value="y")
@patch("pkgmgr.actions.branch.drop_branch.get_current_branch", return_value="feature-x")
@patch("pkgmgr.actions.branch.drop_branch._resolve_base_branch", return_value="main")
@patch("pkgmgr.actions.branch.drop_branch.run_git")
def test_drop_branch_happy_path(self, run_git, resolve, current, input_mock):
drop_branch(None, cwd=".")
expected = [
(["branch", "-d", "feature-x"],),
(["push", "origin", "--delete", "feature-x"],),
]
actual = [call.args for call in run_git.call_args_list]
self.assertEqual(actual, expected)
@patch("pkgmgr.actions.branch.drop_branch.get_current_branch", return_value="main")
@patch("pkgmgr.actions.branch.drop_branch._resolve_base_branch", return_value="main")
def test_refuses_to_drop_base_branch(self, resolve, current):
with self.assertRaises(RuntimeError):
drop_branch(None)
@patch("pkgmgr.actions.branch.drop_branch.input", return_value="n")
@patch("pkgmgr.actions.branch.drop_branch.get_current_branch", return_value="feature-x")
@patch("pkgmgr.actions.branch.drop_branch._resolve_base_branch", return_value="main")
@patch("pkgmgr.actions.branch.drop_branch.run_git")
def test_drop_branch_aborts_on_no(self, run_git, resolve, current, input_mock):
drop_branch(None, cwd=".")
run_git.assert_not_called()
@patch("pkgmgr.actions.branch.drop_branch.get_current_branch", return_value="feature-x")
@patch("pkgmgr.actions.branch.drop_branch._resolve_base_branch", return_value="main")
@patch("pkgmgr.actions.branch.drop_branch.run_git")
def test_drop_branch_force_skips_prompt(self, run_git, resolve, current):
drop_branch(None, cwd=".", force=True)
self.assertGreater(len(run_git.call_args_list), 0)
@patch("pkgmgr.actions.branch.drop_branch.get_current_branch", side_effect=GitError("fail"))
def test_drop_branch_errors_if_no_branch_detected(self, current):
with self.assertRaises(RuntimeError):
drop_branch(None)
if __name__ == "__main__":
unittest.main()


@@ -0,0 +1,37 @@
import unittest
from unittest.mock import patch, MagicMock
from pkgmgr.actions.branch.open_branch import open_branch
class TestOpenBranch(unittest.TestCase):
@patch("pkgmgr.actions.branch.open_branch._resolve_base_branch", return_value="main")
@patch("pkgmgr.actions.branch.open_branch.run_git")
def test_open_branch_executes_git_commands(self, run_git, resolve):
open_branch("feature-x", base_branch="main", cwd=".")
expected_calls = [
(["fetch", "origin"],),
(["checkout", "main"],),
(["pull", "origin", "main"],),
(["checkout", "-b", "feature-x"],),
(["push", "-u", "origin", "feature-x"],),
]
actual = [call.args for call in run_git.call_args_list]
self.assertEqual(actual, expected_calls)
@patch("builtins.input", return_value="auto-branch")
@patch("pkgmgr.actions.branch.open_branch._resolve_base_branch", return_value="main")
@patch("pkgmgr.actions.branch.open_branch.run_git")
def test_open_branch_prompts_for_name(self, run_git, resolve, input_mock):
open_branch(None)
calls = [call.args for call in run_git.call_args_list]
self.assertEqual(calls[3][0][0], "checkout") # verify git executed normally
def test_open_branch_rejects_empty_name(self):
with patch("builtins.input", return_value=""):
with self.assertRaises(RuntimeError):
open_branch(None)
if __name__ == "__main__":
unittest.main()
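These tests imply a simple shape for `open_branch`: prompt when no name is given, reject an empty name, then create and publish the branch on top of a freshly pulled base. A sketch under those assumptions (the `ask` parameter is an illustration device, not the real signature):

```python
def open_branch_commands(name, base, ask=input):
    # Prompt for a name when none was passed, mirroring the tested behaviour.
    if not name:
        name = ask("Branch name: ").strip()
    if not name:
        raise RuntimeError("A branch name is required")
    return [
        ["fetch", "origin"],
        ["checkout", base],
        ["pull", "origin", base],
        ["checkout", "-b", name],
        ["push", "-u", "origin", name],
    ]

print(open_branch_commands("feature-x", "main")[3])  # → ['checkout', '-b', 'feature-x']
```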


@@ -1,140 +1,206 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import unittest
from unittest import mock
from unittest.mock import MagicMock, patch
"""
Unit tests for NixFlakeInstaller using unittest (no pytest).
Covers:
- Successful installation (exit_code == 0)
- Mandatory failure → SystemExit with correct code
- Optional failure (pkgmgr default) → no raise, but warning
- supports() behavior incl. PKGMGR_DISABLE_NIX_FLAKE_INSTALLER
"""
import io
import os
import shutil
import tempfile
import unittest
from contextlib import redirect_stdout
from unittest.mock import patch
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install.installers.nix_flake import NixFlakeInstaller
class DummyCtx:
"""Minimal context object to satisfy NixFlakeInstaller.run() / supports()."""
def __init__(self, identifier: str, repo_dir: str, preview: bool = False):
self.identifier = identifier
self.repo_dir = repo_dir
self.preview = preview
class TestNixFlakeInstaller(unittest.TestCase):
def setUp(self) -> None:
self.repo = {"repository": "package-manager"}
# Important: identifier "pkgmgr" triggers both "pkgmgr" and "default"
self.ctx = RepoContext(
repo=self.repo,
identifier="pkgmgr",
repo_dir="/tmp/repo",
repositories_base_dir="/tmp",
bin_dir="/bin",
all_repos=[self.repo],
no_verification=False,
preview=False,
quiet=False,
clone_mode="ssh",
update_dependencies=False,
)
self.installer = NixFlakeInstaller()
# Create a temporary repository directory with a flake.nix file
self._tmpdir = tempfile.mkdtemp(prefix="nix_flake_test_")
self.repo_dir = self._tmpdir
flake_path = os.path.join(self.repo_dir, "flake.nix")
with open(flake_path, "w", encoding="utf-8") as f:
f.write("{}\n")
@patch("pkgmgr.actions.install.installers.nix_flake.os.path.exists")
@patch("pkgmgr.actions.install.installers.nix_flake.shutil.which")
def test_supports_true_when_nix_and_flake_exist(
self,
mock_which: MagicMock,
mock_exists: MagicMock,
) -> None:
mock_which.return_value = "/usr/bin/nix"
mock_exists.return_value = True
# Ensure the disable env var is not set by default
os.environ.pop("PKGMGR_DISABLE_NIX_FLAKE_INSTALLER", None)
with patch.dict(os.environ, {"PKGMGR_DISABLE_NIX_FLAKE_INSTALLER": ""}, clear=False):
self.assertTrue(self.installer.supports(self.ctx))
mock_which.assert_called_once_with("nix")
mock_exists.assert_called_once_with(
os.path.join(self.ctx.repo_dir, self.installer.FLAKE_FILE)
)
def tearDown(self) -> None:
# Cleanup temporary directory
if os.path.isdir(self._tmpdir):
shutil.rmtree(self._tmpdir, ignore_errors=True)
def _enable_nix_in_module(self, which_patch):
"""Ensure shutil.which('nix') in nix_flake module returns a path."""
which_patch.return_value = "/usr/bin/nix"
@patch("pkgmgr.actions.install.installers.nix_flake.os.path.exists")
@patch("pkgmgr.actions.install.installers.nix_flake.shutil.which")
def test_supports_false_when_nix_missing(
self,
mock_which: MagicMock,
mock_exists: MagicMock,
) -> None:
mock_which.return_value = None
mock_exists.return_value = True # flake exists but nix is missing
with patch.dict(os.environ, {"PKGMGR_DISABLE_NIX_FLAKE_INSTALLER": ""}, clear=False):
self.assertFalse(self.installer.supports(self.ctx))
@patch("pkgmgr.actions.install.installers.nix_flake.os.path.exists")
@patch("pkgmgr.actions.install.installers.nix_flake.shutil.which")
def test_supports_false_when_disabled_via_env(
self,
mock_which: MagicMock,
mock_exists: MagicMock,
) -> None:
mock_which.return_value = "/usr/bin/nix"
mock_exists.return_value = True
with patch.dict(
os.environ,
{"PKGMGR_DISABLE_NIX_FLAKE_INSTALLER": "1"},
clear=False,
):
self.assertFalse(self.installer.supports(self.ctx))
@patch("pkgmgr.actions.install.installers.nix_flake.NixFlakeInstaller.supports")
@patch("pkgmgr.actions.install.installers.nix_flake.run_command")
def test_run_removes_old_profile_and_installs_outputs(
self,
mock_run_command: MagicMock,
mock_supports: MagicMock,
) -> None:
"""
run() should:
- remove the old profile
- install both 'pkgmgr' and 'default' outputs for identifier 'pkgmgr'
- call commands in the correct order
"""
mock_supports.return_value = True
commands: list[str] = []
def side_effect(cmd: str, cwd: str | None = None, preview: bool = False, **_: object) -> None:
commands.append(cmd)
mock_run_command.side_effect = side_effect
with patch.dict(os.environ, {"PKGMGR_DISABLE_NIX_FLAKE_INSTALLER": ""}, clear=False):
self.installer.run(self.ctx)
remove_cmd = f"nix profile remove {self.installer.PROFILE_NAME} || true"
install_pkgmgr_cmd = f"nix profile install {self.ctx.repo_dir}#pkgmgr"
install_default_cmd = f"nix profile install {self.ctx.repo_dir}#default"
self.assertIn(remove_cmd, commands)
self.assertIn(install_pkgmgr_cmd, commands)
self.assertIn(install_default_cmd, commands)
self.assertEqual(commands[0], remove_cmd)
def test_nix_flake_run_success(self):
"""
When os.system returns a successful exit code, the installer
should report success and not raise.
"""
ctx = DummyCtx(identifier="some-lib", repo_dir=self.repo_dir)
installer = NixFlakeInstaller()
buf = io.StringIO()
with patch(
"pkgmgr.actions.install.installers.nix_flake.shutil.which"
) as which_mock, patch(
"pkgmgr.actions.install.installers.nix_flake.os.system"
) as system_mock, redirect_stdout(buf):
self._enable_nix_in_module(which_mock)
# Simulate os.system returning success (exit code 0)
system_mock.return_value = 0
# Sanity: supports() must be True
self.assertTrue(installer.supports(ctx))
installer.run(ctx)
out = buf.getvalue()
self.assertIn("[INFO] Running: nix profile install", out)
self.assertIn("Nix flake output 'default' successfully installed.", out)
# Ensure the nix command was actually invoked
system_mock.assert_called_with(
f"nix profile install {self.repo_dir}#default"
)
@patch("pkgmgr.actions.install.installers.nix_flake.shutil.which")
@patch("pkgmgr.actions.install.installers.nix_flake.run_command")
def test_ensure_old_profile_removed_ignores_systemexit(
self,
mock_run_command: MagicMock,
mock_which: MagicMock,
) -> None:
mock_which.return_value = "/usr/bin/nix"
def side_effect(cmd: str, cwd: str | None = None, preview: bool = False, **_: object) -> None:
raise SystemExit(1)
mock_run_command.side_effect = side_effect
self.installer._ensure_old_profile_removed(self.ctx)
remove_cmd = f"nix profile remove {self.installer.PROFILE_NAME} || true"
mock_run_command.assert_called_with(
remove_cmd,
cwd=self.ctx.repo_dir,
preview=self.ctx.preview,
)
def test_nix_flake_run_mandatory_failure_raises(self):
"""
For a generic repository (identifier not pkgmgr/package-manager),
`default` is mandatory and a non-zero exit code should raise SystemExit
with the real exit code (e.g. 1, not 256).
"""
ctx = DummyCtx(identifier="some-lib", repo_dir=self.repo_dir)
installer = NixFlakeInstaller()
buf = io.StringIO()
with patch(
"pkgmgr.actions.install.installers.nix_flake.shutil.which"
) as which_mock, patch(
"pkgmgr.actions.install.installers.nix_flake.os.system"
) as system_mock, redirect_stdout(buf):
self._enable_nix_in_module(which_mock)
# Simulate os.system returning encoded status for exit code 1
# os.system encodes exit code as (exit_code << 8)
system_mock.return_value = 1 << 8
self.assertTrue(installer.supports(ctx))
with self.assertRaises(SystemExit) as cm:
installer.run(ctx)
# The real exit code should be 1 (not 256)
self.assertEqual(cm.exception.code, 1)
out = buf.getvalue()
self.assertIn("[INFO] Running: nix profile install", out)
self.assertIn("[Error] Failed to install Nix flake output 'default'", out)
self.assertIn("[Error] Command exited with code 1", out)
def test_nix_flake_run_optional_failure_does_not_raise(self):
"""
For the package-manager repository, the 'default' output is optional.
Failure to install it must not raise, but should log a warning instead.
"""
ctx = DummyCtx(identifier="pkgmgr", repo_dir=self.repo_dir)
installer = NixFlakeInstaller()
calls = []
def fake_system(cmd: str) -> int:
calls.append(cmd)
# First call (pkgmgr) → success
if len(calls) == 1:
return 0
# Second call (default) → failure (exit code 1 encoded)
return 1 << 8
buf = io.StringIO()
with patch(
"pkgmgr.actions.install.installers.nix_flake.shutil.which"
) as which_mock, patch(
"pkgmgr.actions.install.installers.nix_flake.os.system",
side_effect=fake_system,
), redirect_stdout(buf):
self._enable_nix_in_module(which_mock)
self.assertTrue(installer.supports(ctx))
# Optional failure must NOT raise
installer.run(ctx)
out = buf.getvalue()
# Both outputs should have been mentioned
self.assertIn(
"attempting to install profile outputs: pkgmgr, default", out
)
# First output ("pkgmgr") succeeded
self.assertIn(
"Nix flake output 'pkgmgr' successfully installed.", out
)
# Second output ("default") failed but did not raise
self.assertIn(
"[Error] Failed to install Nix flake output 'default'", out
)
self.assertIn("[Error] Command exited with code 1", out)
self.assertIn(
"Continuing despite failure to install optional output 'default'.",
out,
)
# Ensure we actually called os.system twice (pkgmgr and default)
self.assertEqual(len(calls), 2)
self.assertIn(
f"nix profile install {self.repo_dir}#pkgmgr",
calls[0],
)
self.assertIn(
f"nix profile install {self.repo_dir}#default",
calls[1],
)
def test_nix_flake_supports_respects_disable_env(self):
"""
PKGMGR_DISABLE_NIX_FLAKE_INSTALLER=1 must disable the installer,
even if flake.nix exists and nix is available.
"""
ctx = DummyCtx(identifier="pkgmgr", repo_dir=self.repo_dir)
installer = NixFlakeInstaller()
with patch(
"pkgmgr.actions.install.installers.nix_flake.shutil.which"
) as which_mock:
self._enable_nix_in_module(which_mock)
os.environ["PKGMGR_DISABLE_NIX_FLAKE_INSTALLER"] = "1"
self.assertFalse(installer.supports(ctx))
if __name__ == "__main__":
unittest.main()
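The exit-code assertions hinge on one POSIX detail: on Unix, `os.system` returns a wait status in which the process exit code sits in the high byte, so a failing `nix` call that exits with code 1 comes back as 256. The decoding the tests expect ("1, not 256") can be illustrated directly:

```python
def real_exit_code(status: int) -> int:
    # On Unix, os.system returns a wait status with the exit code in
    # the high byte: exit code 1 is encoded as 256 (1 << 8).
    return status >> 8

assert real_exit_code(1 << 8) == 1
assert real_exit_code(0) == 0
print(real_exit_code(256))  # → 1
```

On Python 3.9+, `os.waitstatus_to_exitcode` performs the same decoding (and also handles signal-terminated statuses), which the shift above does not.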


@@ -0,0 +1,110 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import unittest
from pkgmgr.actions.mirror.git_remote import (
build_default_ssh_url,
determine_primary_remote_url,
)
from pkgmgr.actions.mirror.types import MirrorMap, Repository
class TestMirrorGitRemote(unittest.TestCase):
"""
Unit tests for SSH URL and primary remote selection logic.
"""
def test_build_default_ssh_url_without_port(self) -> None:
repo: Repository = {
"provider": "github.com",
"account": "kevinveenbirkenbach",
"repository": "package-manager",
}
url = build_default_ssh_url(repo)
self.assertEqual(
url,
"git@github.com:kevinveenbirkenbach/package-manager.git",
)
def test_build_default_ssh_url_with_port(self) -> None:
repo: Repository = {
"provider": "code.cymais.cloud",
"account": "kevinveenbirkenbach",
"repository": "pkgmgr",
"port": 2201,
}
url = build_default_ssh_url(repo)
self.assertEqual(
url,
"ssh://git@code.cymais.cloud:2201/kevinveenbirkenbach/pkgmgr.git",
)
def test_build_default_ssh_url_missing_fields_returns_none(self) -> None:
repo: Repository = {
"provider": "github.com",
"account": "kevinveenbirkenbach",
# "repository" intentionally omitted
}
url = build_default_ssh_url(repo)
self.assertIsNone(url)
def test_determine_primary_remote_url_prefers_origin_in_resolved_mirrors(
self,
) -> None:
repo: Repository = {
"provider": "github.com",
"account": "kevinveenbirkenbach",
"repository": "package-manager",
}
mirrors: MirrorMap = {
"origin": "git@github.com:kevinveenbirkenbach/package-manager.git",
"backup": "ssh://git@git.veen.world:2201/kevinveenbirkenbach/pkgmgr.git",
}
url = determine_primary_remote_url(repo, mirrors)
self.assertEqual(
url,
"git@github.com:kevinveenbirkenbach/package-manager.git",
)
def test_determine_primary_remote_url_uses_any_mirror_if_no_origin(self) -> None:
repo: Repository = {
"provider": "github.com",
"account": "kevinveenbirkenbach",
"repository": "package-manager",
}
mirrors: MirrorMap = {
"backup": "ssh://git@git.veen.world:2201/kevinveenbirkenbach/pkgmgr.git",
"mirror2": "ssh://git@code.cymais.cloud:2201/kevinveenbirkenbach/pkgmgr.git",
}
url = determine_primary_remote_url(repo, mirrors)
# Sorted alphabetically: backup, mirror2 → backup wins
self.assertEqual(
url,
"ssh://git@git.veen.world:2201/kevinveenbirkenbach/pkgmgr.git",
)
def test_determine_primary_remote_url_falls_back_to_default_ssh(self) -> None:
repo: Repository = {
"provider": "github.com",
"account": "kevinveenbirkenbach",
"repository": "package-manager",
}
mirrors: MirrorMap = {}
url = determine_primary_remote_url(repo, mirrors)
self.assertEqual(
url,
"git@github.com:kevinveenbirkenbach/package-manager.git",
)
if __name__ == "__main__":
unittest.main()
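The URL-building tests fully specify the output format: scp-like syntax without a port, an `ssh://` URL when a port is configured, and `None` when a required field is missing. A self-contained sketch of that contract (the real function takes a repository dict; the flat parameters here are an illustration):

```python
def build_ssh_url(provider, account, repository, port=None):
    # scp-like syntax without a port, ssh:// URL when a port is set;
    # None when any required field is missing.
    if not (provider and account and repository):
        return None
    if port:
        return f"ssh://git@{provider}:{port}/{account}/{repository}.git"
    return f"git@{provider}:{account}/{repository}.git"

print(build_ssh_url("github.com", "kevinveenbirkenbach", "package-manager"))
# → git@github.com:kevinveenbirkenbach/package-manager.git
```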


@@ -0,0 +1,135 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import os
import tempfile
import unittest
from pkgmgr.actions.mirror.io import (
load_config_mirrors,
read_mirrors_file,
)
class TestMirrorIO(unittest.TestCase):
"""
Unit tests for pkgmgr.actions.mirror.io helpers.
"""
# ------------------------------------------------------------------
# load_config_mirrors
# ------------------------------------------------------------------
def test_load_config_mirrors_from_dict(self) -> None:
repo = {
"mirrors": {
"origin": "ssh://git@example.com/account/repo.git",
"backup": "ssh://git@backup/account/repo.git",
"empty": "",
"none": None,
}
}
mirrors = load_config_mirrors(repo)
self.assertEqual(
mirrors,
{
"origin": "ssh://git@example.com/account/repo.git",
"backup": "ssh://git@backup/account/repo.git",
},
)
def test_load_config_mirrors_from_list(self) -> None:
repo = {
"mirrors": [
{"name": "origin", "url": "ssh://git@example.com/account/repo.git"},
{"name": "backup", "url": "ssh://git@backup/account/repo.git"},
{"name": "", "url": "ssh://git@invalid/ignored.git"},
{"name": "missing-url"},
"not-a-dict",
]
}
mirrors = load_config_mirrors(repo)
self.assertEqual(
mirrors,
{
"origin": "ssh://git@example.com/account/repo.git",
"backup": "ssh://git@backup/account/repo.git",
},
)
def test_load_config_mirrors_empty_when_missing(self) -> None:
repo = {}
mirrors = load_config_mirrors(repo)
self.assertEqual(mirrors, {})
# ------------------------------------------------------------------
# read_mirrors_file
# ------------------------------------------------------------------
def test_read_mirrors_file_with_named_and_url_only_entries(self) -> None:
"""
Ensure that the MIRRORS file format is parsed correctly:
- 'name url' → exact name
- 'url' → auto name derived from netloc (host[:port]),
with numeric suffix if duplicated.
"""
with tempfile.TemporaryDirectory() as tmpdir:
mirrors_path = os.path.join(tmpdir, "MIRRORS")
content = "\n".join(
[
"# comment",
"",
"origin ssh://git@example.com/account/repo.git",
"https://github.com/kevinveenbirkenbach/package-manager",
"https://github.com/kevinveenbirkenbach/another-repo",
"ssh://git@git.veen.world:2201/kevinveenbirkenbach/pkgmgr.git",
]
)
with open(mirrors_path, "w", encoding="utf-8") as fh:
fh.write(content + "\n")
mirrors = read_mirrors_file(tmpdir)
# 'origin' is preserved as given
self.assertIn("origin", mirrors)
self.assertEqual(
mirrors["origin"],
"ssh://git@example.com/account/repo.git",
)
# Two GitHub URLs → auto names: github.com, github.com2
github_urls = {
mirrors.get("github.com"),
mirrors.get("github.com2"),
}
self.assertIn(
"https://github.com/kevinveenbirkenbach/package-manager",
github_urls,
)
self.assertIn(
"https://github.com/kevinveenbirkenbach/another-repo",
github_urls,
)
# SSH URL with a user part → netloc is "git@git.veen.world:2201"
# → host = "git@git.veen.world"
self.assertIn("git@git.veen.world", mirrors)
self.assertEqual(
mirrors["git@git.veen.world"],
"ssh://git@git.veen.world:2201/kevinveenbirkenbach/pkgmgr.git",
)
def test_read_mirrors_file_missing_returns_empty(self) -> None:
with tempfile.TemporaryDirectory() as tmpdir:
mirrors = read_mirrors_file(tmpdir) # no MIRRORS file
self.assertEqual(mirrors, {})
if __name__ == "__main__":
unittest.main()
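The auto-naming rule the docstring describes (name derived from the netloc's `host[:port]`, with a numeric suffix on duplicates) can be reproduced with `urllib.parse`. One plausible implementation consistent with the assertions above, assuming URL-only entries and a set of already-taken names:

```python
from urllib.parse import urlparse

def auto_mirror_name(url: str, taken: set[str]) -> str:
    # Derive a name from the URL's netloc; for scp-like URLs urlparse
    # yields no netloc, so fall back to the part before the first ':'.
    netloc = urlparse(url).netloc or url.split(":", 1)[0]
    host = netloc.rsplit(":", 1)[0]  # strip a trailing :port, keep user@host
    name, n = host, 1
    while name in taken:
        n += 1
        name = f"{host}{n}"
    taken.add(name)
    return name

taken: set[str] = set()
print(auto_mirror_name("https://github.com/kevinveenbirkenbach/package-manager", taken))  # → github.com
print(auto_mirror_name("https://github.com/kevinveenbirkenbach/another-repo", taken))    # → github.com2
```

Note how `ssh://git@git.veen.world:2201/...` keeps its user part: `urlparse` reports the netloc as `git@git.veen.world:2201`, and only the port is stripped.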


@@ -0,0 +1,59 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import unittest
from unittest.mock import patch
from pkgmgr.actions.mirror.setup_cmd import _probe_mirror
from pkgmgr.core.git import GitError
class TestMirrorSetupCmd(unittest.TestCase):
"""
Unit tests for the non-destructive remote probing logic in setup_cmd.
"""
@patch("pkgmgr.actions.mirror.setup_cmd.run_git")
def test_probe_mirror_success_returns_true_and_empty_message(
self,
mock_run_git,
) -> None:
"""
If run_git returns successfully, _probe_mirror must report (True, "").
"""
mock_run_git.return_value = "dummy-output"
ok, message = _probe_mirror(
"ssh://git@code.cymais.cloud:2201/kevinveenbirkenbach/pkgmgr.git",
"/tmp/some-repo",
)
self.assertTrue(ok)
self.assertEqual(message, "")
mock_run_git.assert_called_once()
@patch("pkgmgr.actions.mirror.setup_cmd.run_git")
def test_probe_mirror_failure_returns_false_and_error_message(
self,
mock_run_git,
) -> None:
"""
If run_git raises GitError, _probe_mirror must report (False, <message>),
and not re-raise the exception.
"""
mock_run_git.side_effect = GitError("Git command failed (simulated)")
ok, message = _probe_mirror(
"ssh://git@code.cymais.cloud:2201/kevinveenbirkenbach/pkgmgr.git",
"/tmp/some-repo",
)
self.assertFalse(ok)
self.assertIn("Git command failed", message)
mock_run_git.assert_called_once()
if __name__ == "__main__":
unittest.main()
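The probing contract is narrow: success yields `(True, "")`, a `GitError` yields `(False, <message>)` and is never re-raised. A sketch under stated assumptions (the `GitError` stand-in and the `ls-remote` subcommand are illustrative; the real `_probe_mirror` may use a different non-destructive git call):

```python
class GitError(Exception):
    """Stand-in for pkgmgr.core.git.GitError."""

def probe_mirror(url, cwd, run_git):
    # Non-destructive reachability check; never propagates GitError.
    try:
        run_git(["ls-remote", url], cwd=cwd)  # subcommand is an assumption
        return True, ""
    except GitError as exc:
        return False, str(exc)

ok, msg = probe_mirror("ssh://git@host/repo.git", ".", lambda args, cwd: "ref")
print(ok, repr(msg))  # → True ''
```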


@@ -5,8 +5,9 @@ from unittest.mock import patch
from pkgmgr.core.git import GitError
from pkgmgr.actions.release.git_ops import (
ensure_clean_and_synced,
is_highest_version_tag,
run_git_command,
sync_branch_with_remote,
update_latest_tag,
)
@@ -14,12 +15,13 @@ from pkgmgr.actions.release.git_ops import (
class TestRunGitCommand(unittest.TestCase):
@patch("pkgmgr.actions.release.git_ops.subprocess.run")
def test_run_git_command_success(self, mock_run) -> None:
# No exception means success
run_git_command("git status")
mock_run.assert_called_once()
args, kwargs = mock_run.call_args
self.assertIn("git status", args[0])
self.assertTrue(kwargs.get("check"))
self.assertTrue(kwargs.get("capture_output"))
self.assertTrue(kwargs.get("text"))
@patch("pkgmgr.actions.release.git_ops.subprocess.run")
def test_run_git_command_failure_raises_git_error(self, mock_run) -> None:
@@ -36,58 +38,161 @@ class TestRunGitCommand(unittest.TestCase):
run_git_command("git status")
class TestSyncBranchWithRemote(unittest.TestCase):
@patch("pkgmgr.actions.release.git_ops.run_git_command")
def test_sync_branch_with_remote_skips_non_main_master(
self,
mock_run_git_command,
) -> None:
sync_branch_with_remote("feature/my-branch", preview=False)
mock_run_git_command.assert_not_called()
@patch("pkgmgr.actions.release.git_ops.run_git_command")
def test_sync_branch_with_remote_preview_on_main_does_not_run_git(
self,
mock_run_git_command,
) -> None:
sync_branch_with_remote("main", preview=True)
mock_run_git_command.assert_not_called()
@patch("pkgmgr.actions.release.git_ops.run_git_command")
def test_sync_branch_with_remote_main_runs_fetch_and_pull(
self,
mock_run_git_command,
) -> None:
sync_branch_with_remote("main", preview=False)
# fetch/pull should be invoked in real mode
calls = [c.args[0] for c in mock_run_git_command.call_args_list]
self.assertIn("git fetch origin", calls)
self.assertIn("git pull origin main", calls)
class TestEnsureCleanAndSynced(unittest.TestCase):
def _fake_run(self, cmd: str, *args, **kwargs):
class R:
def __init__(self, stdout: str = "", stderr: str = "", returncode: int = 0):
self.stdout = stdout
self.stderr = stderr
self.returncode = returncode
# upstream detection
if "git rev-parse --abbrev-ref --symbolic-full-name @{u}" in cmd:
return R(stdout="origin/main")
if cmd == "git fetch --prune --tags":
return R(stdout="")
if cmd == "git pull --ff-only":
return R(stdout="Already up to date.")
return R(stdout="")
@patch("pkgmgr.actions.release.git_ops.subprocess.run")
def test_ensure_clean_and_synced_preview_does_not_run_git_commands(self, mock_run) -> None:
def fake(cmd: str, *args, **kwargs):
class R:
def __init__(self, stdout: str = ""):
self.stdout = stdout
self.stderr = ""
self.returncode = 0
if "git rev-parse --abbrev-ref --symbolic-full-name @{u}" in cmd:
return R(stdout="origin/main")
return R(stdout="")
mock_run.side_effect = fake
ensure_clean_and_synced(preview=True)
called_cmds = [c.args[0] for c in mock_run.call_args_list]
self.assertTrue(any("git rev-parse" in c for c in called_cmds))
self.assertFalse(any(c == "git fetch --prune --tags" for c in called_cmds))
self.assertFalse(any(c == "git pull --ff-only" for c in called_cmds))
@patch("pkgmgr.actions.release.git_ops.subprocess.run")
def test_ensure_clean_and_synced_no_upstream_skips(self, mock_run) -> None:
def fake(cmd: str, *args, **kwargs):
class R:
def __init__(self, stdout: str = ""):
self.stdout = stdout
self.stderr = ""
self.returncode = 0
if "git rev-parse --abbrev-ref --symbolic-full-name @{u}" in cmd:
return R(stdout="") # no upstream
return R(stdout="")
mock_run.side_effect = fake
ensure_clean_and_synced(preview=False)
called_cmds = [c.args[0] for c in mock_run.call_args_list]
self.assertTrue(any("git rev-parse" in c for c in called_cmds))
self.assertFalse(any(c == "git fetch --prune --tags" for c in called_cmds))
self.assertFalse(any(c == "git pull --ff-only" for c in called_cmds))
@patch("pkgmgr.actions.release.git_ops.subprocess.run")
def test_ensure_clean_and_synced_real_runs_fetch_and_pull(self, mock_run) -> None:
mock_run.side_effect = self._fake_run
ensure_clean_and_synced(preview=False)
called_cmds = [c.args[0] for c in mock_run.call_args_list]
self.assertIn("git fetch origin --prune --tags --force", called_cmds)
self.assertIn("git pull --ff-only", called_cmds)
class TestIsHighestVersionTag(unittest.TestCase):
@patch("pkgmgr.actions.release.git_ops.subprocess.run")
def test_is_highest_version_tag_no_tags_true(self, mock_run) -> None:
def fake(cmd: str, *args, **kwargs):
class R:
def __init__(self, stdout: str = ""):
self.stdout = stdout
self.stderr = ""
self.returncode = 0
if "git tag --list" in cmd and "'v*'" in cmd:
return R(stdout="") # no tags
return R(stdout="")
mock_run.side_effect = fake
self.assertTrue(is_highest_version_tag("v1.0.0"))
# ensure at least the list command was queried
called_cmds = [c.args[0] for c in mock_run.call_args_list]
self.assertTrue(any("git tag --list" in c for c in called_cmds))
@patch("pkgmgr.actions.release.git_ops.subprocess.run")
def test_is_highest_version_tag_compares_sort_v(self, mock_run) -> None:
"""
This test is aligned with the CURRENT implementation:
return tag >= latest
which is a *string comparison*, not a semantic version compare.
Therefore, a candidate like v1.2.0 is lexicographically >= v1.10.0
(because '2' > '1' at the first differing char after 'v1.').
"""
def fake(cmd: str, *args, **kwargs):
class R:
def __init__(self, stdout: str = ""):
self.stdout = stdout
self.stderr = ""
self.returncode = 0
if cmd.strip() == "git tag --list 'v*'":
return R(stdout="v1.0.0\nv1.2.0\nv1.10.0\n")
if "git tag --list 'v*'" in cmd and "sort -V" in cmd and "tail -n1" in cmd:
return R(stdout="v1.10.0")
return R(stdout="")
mock_run.side_effect = fake
# With the current implementation (string >=), both of these are True.
self.assertTrue(is_highest_version_tag("v1.10.0"))
self.assertTrue(is_highest_version_tag("v1.2.0"))
# And a clearly lexicographically smaller candidate should be False.
# Example: "v1.0.0" < "v1.10.0"
self.assertFalse(is_highest_version_tag("v1.0.0"))
# Ensure both capture commands were executed
called_cmds = [c.args[0] for c in mock_run.call_args_list]
self.assertTrue(any(cmd == "git tag --list 'v*'" for cmd in called_cmds))
self.assertTrue(any("sort -V" in cmd and "tail -n1" in cmd for cmd in called_cmds))
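The lexicographic pitfall the docstring above describes can be demonstrated standalone; `semver_key` is a hypothetical helper for illustration, not part of pkgmgr:

```python
def semver_key(tag: str) -> tuple:
    # Turn "v1.10.0" into (1, 10, 0) so comparison is numeric per component.
    return tuple(int(part) for part in tag.lstrip("v").split("."))

# Plain string comparison: '2' > '1' at the first differing character,
# so v1.2.0 incorrectly ranks at or above v1.10.0.
string_says = "v1.2.0" >= "v1.10.0"
# Numeric tuple comparison: (1, 2, 0) < (1, 10, 0), the semantically
# correct ordering.
semver_says = semver_key("v1.2.0") >= semver_key("v1.10.0")
print(string_says, semver_says)  # prints: True False
```

This is exactly why the test above asserts `is_highest_version_tag("v1.2.0")` is True under the current string-comparison implementation.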
class TestUpdateLatestTag(unittest.TestCase):
@patch("pkgmgr.actions.release.git_ops.run_git_command")
def test_update_latest_tag_preview_does_not_call_git(
self,
mock_run_git_command,
) -> None:
update_latest_tag("v1.2.3", preview=True)
mock_run_git_command.assert_not_called()
@patch("pkgmgr.actions.release.git_ops.run_git_command")
def test_update_latest_tag_real_calls_git_with_dereference_and_message(
self,
mock_run_git_command,
) -> None:
update_latest_tag("v1.2.3", preview=False)
calls = [c.args[0] for c in mock_run_git_command.call_args_list]
# Must dereference the tag object and create an annotated tag with message
self.assertIn(
'git tag -f -a latest v1.2.3^{} -m "Floating latest tag for v1.2.3"',
calls,
)
self.assertIn("git push origin latest --force", calls)
if __name__ == "__main__":
unittest.main()
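The `^{}` suffix asserted above is git revision syntax that peels an annotated tag object down to the commit it points at, so the floating `latest` tag lands on the commit rather than on the tag object. A minimal sketch of the command construction (the helper name is hypothetical):

```python
def latest_tag_commands(tag: str) -> list:
    # "tag^{}" dereferences the annotated tag object to the underlying
    # commit; "-f -a" recreates "latest" as an annotated tag, and the
    # forced push moves the remote ref.
    return [
        f'git tag -f -a latest {tag}^{{}} -m "Floating latest tag for {tag}"',
        "git push origin latest --force",
    ]

print(latest_tag_commands("v1.2.3")[0])
```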


@@ -0,0 +1,14 @@
from __future__ import annotations
import unittest
class TestReleasePackageInit(unittest.TestCase):
def test_release_is_reexported(self) -> None:
from pkgmgr.actions.release import release # noqa: F401
self.assertTrue(callable(release))
if __name__ == "__main__":
unittest.main()


@@ -0,0 +1,50 @@
from __future__ import annotations
import unittest
from unittest.mock import patch
from pkgmgr.actions.release.prompts import (
confirm_proceed_release,
should_delete_branch,
)
class TestShouldDeleteBranch(unittest.TestCase):
def test_force_true_skips_prompt_and_returns_true(self) -> None:
self.assertTrue(should_delete_branch(force=True))
@patch("pkgmgr.actions.release.prompts.sys.stdin.isatty", return_value=False)
def test_non_interactive_returns_false(self, _mock_isatty) -> None:
self.assertFalse(should_delete_branch(force=False))
@patch("pkgmgr.actions.release.prompts.sys.stdin.isatty", return_value=True)
@patch("builtins.input", return_value="y")
def test_interactive_yes_returns_true(self, _mock_input, _mock_isatty) -> None:
self.assertTrue(should_delete_branch(force=False))
@patch("pkgmgr.actions.release.prompts.sys.stdin.isatty", return_value=True)
@patch("builtins.input", return_value="N")
def test_interactive_no_returns_false(self, _mock_input, _mock_isatty) -> None:
self.assertFalse(should_delete_branch(force=False))
class TestConfirmProceedRelease(unittest.TestCase):
@patch("builtins.input", return_value="y")
def test_confirm_yes(self, _mock_input) -> None:
self.assertTrue(confirm_proceed_release())
@patch("builtins.input", return_value="no")
def test_confirm_no(self, _mock_input) -> None:
self.assertFalse(confirm_proceed_release())
@patch("builtins.input", side_effect=EOFError)
def test_confirm_eof_returns_false(self, _mock_input) -> None:
self.assertFalse(confirm_proceed_release())
@patch("builtins.input", side_effect=KeyboardInterrupt)
def test_confirm_keyboard_interrupt_returns_false(self, _mock_input) -> None:
self.assertFalse(confirm_proceed_release())
if __name__ == "__main__":
unittest.main()
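The prompt contract these tests encode can be sketched as follows; this is a minimal illustration of the expected behavior, and the real `pkgmgr.actions.release.prompts` implementation may differ in wording and details:

```python
import sys

def should_delete_branch_sketch(force: bool = False) -> bool:
    if force:
        return True                      # force skips the prompt entirely
    if not sys.stdin.isatty():
        return False                     # never delete in non-interactive runs
    try:
        answer = input("Delete branch? [y/N] ")
    except (EOFError, KeyboardInterrupt):
        return False                     # treat Ctrl-D / Ctrl-C as "no"
    return answer.strip().lower() == "y"
```

The default answer is "no": only an explicit `y` (case-insensitive) deletes, matching the interactive yes/no tests above.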


@@ -1,155 +0,0 @@
from __future__ import annotations
import unittest
from unittest.mock import patch
from pkgmgr.core.version.semver import SemVer
from pkgmgr.actions.release import release
class TestReleaseOrchestration(unittest.TestCase):
def test_release_happy_path_uses_helpers_and_git(self) -> None:
with patch("pkgmgr.actions.release.sys.stdin.isatty", return_value=False), \
patch("pkgmgr.actions.release.determine_current_version") as mock_determine_current_version, \
patch("pkgmgr.actions.release.bump_semver") as mock_bump_semver, \
patch("pkgmgr.actions.release.update_pyproject_version") as mock_update_pyproject, \
patch("pkgmgr.actions.release.update_changelog") as mock_update_changelog, \
patch("pkgmgr.actions.release.get_current_branch", return_value="develop") as mock_get_current_branch, \
patch("pkgmgr.actions.release.update_flake_version") as mock_update_flake, \
patch("pkgmgr.actions.release.update_pkgbuild_version") as mock_update_pkgbuild, \
patch("pkgmgr.actions.release.update_spec_version") as mock_update_spec, \
patch("pkgmgr.actions.release.update_debian_changelog") as mock_update_debian_changelog, \
patch("pkgmgr.actions.release.update_spec_changelog") as mock_update_spec_changelog, \
patch("pkgmgr.actions.release.run_git_command") as mock_run_git_command, \
patch("pkgmgr.actions.release.sync_branch_with_remote") as mock_sync_branch, \
patch("pkgmgr.actions.release.update_latest_tag") as mock_update_latest_tag:
mock_determine_current_version.return_value = SemVer(1, 2, 3)
mock_bump_semver.return_value = SemVer(1, 2, 4)
release(
pyproject_path="pyproject.toml",
changelog_path="CHANGELOG.md",
release_type="patch",
message="Test release",
preview=False,
)
# Current version + bump
mock_determine_current_version.assert_called_once()
mock_bump_semver.assert_called_once()
args, kwargs = mock_bump_semver.call_args
self.assertEqual(args[0], SemVer(1, 2, 3))
self.assertEqual(args[1], "patch")
self.assertEqual(kwargs, {})
# pyproject update
mock_update_pyproject.assert_called_once()
args, kwargs = mock_update_pyproject.call_args
self.assertEqual(args[0], "pyproject.toml")
self.assertEqual(args[1], "1.2.4")
self.assertEqual(kwargs.get("preview"), False)
# changelog update (project)
mock_update_changelog.assert_called_once()
args, kwargs = mock_update_changelog.call_args
self.assertEqual(args[0], "CHANGELOG.md")
self.assertEqual(args[1], "1.2.4")
self.assertEqual(kwargs.get("message"), "Test release")
self.assertEqual(kwargs.get("preview"), False)
# Additional packaging helpers called with preview=False
mock_update_flake.assert_called_once()
self.assertEqual(mock_update_flake.call_args[1].get("preview"), False)
mock_update_pkgbuild.assert_called_once()
self.assertEqual(mock_update_pkgbuild.call_args[1].get("preview"), False)
mock_update_spec.assert_called_once()
self.assertEqual(mock_update_spec.call_args[1].get("preview"), False)
mock_update_debian_changelog.assert_called_once()
self.assertEqual(
mock_update_debian_changelog.call_args[1].get("preview"),
False,
)
# Fedora / RPM %changelog helper
mock_update_spec_changelog.assert_called_once()
self.assertEqual(
mock_update_spec_changelog.call_args[1].get("preview"),
False,
)
# Git operations
mock_get_current_branch.assert_called_once()
git_calls = [c.args[0] for c in mock_run_git_command.call_args_list]
self.assertIn('git commit -am "Release version 1.2.4"', git_calls)
self.assertIn('git tag -a v1.2.4 -m "Test release"', git_calls)
self.assertIn("git push origin develop", git_calls)
self.assertIn("git push origin --tags", git_calls)
# Branch sync & latest tag update
mock_sync_branch.assert_called_once_with("develop", preview=False)
mock_update_latest_tag.assert_called_once_with("v1.2.4", preview=False)
def test_release_preview_mode_skips_git_and_uses_preview_flag(self) -> None:
with patch("pkgmgr.actions.release.determine_current_version") as mock_determine_current_version, \
patch("pkgmgr.actions.release.bump_semver") as mock_bump_semver, \
patch("pkgmgr.actions.release.update_pyproject_version") as mock_update_pyproject, \
patch("pkgmgr.actions.release.update_changelog") as mock_update_changelog, \
patch("pkgmgr.actions.release.get_current_branch", return_value="develop") as mock_get_current_branch, \
patch("pkgmgr.actions.release.update_flake_version") as mock_update_flake, \
patch("pkgmgr.actions.release.update_pkgbuild_version") as mock_update_pkgbuild, \
patch("pkgmgr.actions.release.update_spec_version") as mock_update_spec, \
patch("pkgmgr.actions.release.update_debian_changelog") as mock_update_debian_changelog, \
patch("pkgmgr.actions.release.update_spec_changelog") as mock_update_spec_changelog, \
patch("pkgmgr.actions.release.run_git_command") as mock_run_git_command, \
patch("pkgmgr.actions.release.sync_branch_with_remote") as mock_sync_branch, \
patch("pkgmgr.actions.release.update_latest_tag") as mock_update_latest_tag:
mock_determine_current_version.return_value = SemVer(1, 2, 3)
mock_bump_semver.return_value = SemVer(1, 2, 4)
release(
pyproject_path="pyproject.toml",
changelog_path="CHANGELOG.md",
release_type="patch",
message="Preview release",
preview=True,
)
# All update helpers must be called with preview=True
mock_update_pyproject.assert_called_once()
self.assertTrue(mock_update_pyproject.call_args[1].get("preview"))
mock_update_changelog.assert_called_once()
self.assertTrue(mock_update_changelog.call_args[1].get("preview"))
mock_update_flake.assert_called_once()
self.assertTrue(mock_update_flake.call_args[1].get("preview"))
mock_update_pkgbuild.assert_called_once()
self.assertTrue(mock_update_pkgbuild.call_args[1].get("preview"))
mock_update_spec.assert_called_once()
self.assertTrue(mock_update_spec.call_args[1].get("preview"))
mock_update_debian_changelog.assert_called_once()
self.assertTrue(mock_update_debian_changelog.call_args[1].get("preview"))
# Fedora / RPM spec changelog helper in preview mode
mock_update_spec_changelog.assert_called_once()
self.assertTrue(mock_update_spec_changelog.call_args[1].get("preview"))
# In preview mode no real git commands must be executed
mock_run_git_command.assert_not_called()
# Branch sync is still invoked (with preview=True internally),
# and latest tag is only announced in preview mode
mock_sync_branch.assert_called_once_with("develop", preview=True)
mock_update_latest_tag.assert_called_once_with("v1.2.4", preview=True)
if __name__ == "__main__":
unittest.main()


@@ -0,0 +1,59 @@
from __future__ import annotations
import unittest
from unittest.mock import patch
from pkgmgr.actions.release.workflow import release
class TestWorkflowReleaseEntryPoint(unittest.TestCase):
@patch("pkgmgr.actions.release.workflow._release_impl")
def test_release_preview_calls_impl_preview_only(self, mock_impl) -> None:
release(preview=True, force=False, close=False)
mock_impl.assert_called_once()
kwargs = mock_impl.call_args.kwargs
self.assertTrue(kwargs["preview"])
self.assertFalse(kwargs["force"])
@patch("pkgmgr.actions.release.workflow._release_impl")
@patch("pkgmgr.actions.release.workflow.sys.stdin.isatty", return_value=False)
def test_release_non_interactive_runs_real_without_confirmation(self, _mock_isatty, mock_impl) -> None:
release(preview=False, force=False, close=False)
mock_impl.assert_called_once()
kwargs = mock_impl.call_args.kwargs
self.assertFalse(kwargs["preview"])
@patch("pkgmgr.actions.release.workflow._release_impl")
def test_release_force_runs_real_without_confirmation(self, mock_impl) -> None:
release(preview=False, force=True, close=False)
mock_impl.assert_called_once()
kwargs = mock_impl.call_args.kwargs
self.assertFalse(kwargs["preview"])
self.assertTrue(kwargs["force"])
@patch("pkgmgr.actions.release.workflow._release_impl")
@patch("pkgmgr.actions.release.workflow.confirm_proceed_release", return_value=False)
@patch("pkgmgr.actions.release.workflow.sys.stdin.isatty", return_value=True)
def test_release_interactive_decline_runs_only_preview(self, _mock_isatty, _mock_confirm, mock_impl) -> None:
release(preview=False, force=False, close=False)
# interactive path: preview first, then decline => only one call
self.assertEqual(mock_impl.call_count, 1)
self.assertTrue(mock_impl.call_args_list[0].kwargs["preview"])
@patch("pkgmgr.actions.release.workflow._release_impl")
@patch("pkgmgr.actions.release.workflow.confirm_proceed_release", return_value=True)
@patch("pkgmgr.actions.release.workflow.sys.stdin.isatty", return_value=True)
def test_release_interactive_accept_runs_preview_then_real(self, _mock_isatty, _mock_confirm, mock_impl) -> None:
release(preview=False, force=False, close=False)
self.assertEqual(mock_impl.call_count, 2)
self.assertTrue(mock_impl.call_args_list[0].kwargs["preview"])
self.assertFalse(mock_impl.call_args_list[1].kwargs["preview"])
if __name__ == "__main__":
unittest.main()
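The preview-then-confirm flow these tests pin down can be sketched as control flow; passing `release_impl` and `confirm` as parameters is an assumption made here for testability, whereas the real workflow module uses its own imports:

```python
import sys

def release_flow_sketch(release_impl, confirm, preview=False, force=False):
    # Explicit preview: run the dry-run once and stop.
    if preview:
        release_impl(preview=True, force=force)
        return
    # --force or a non-interactive session: run the real release directly.
    if force or not sys.stdin.isatty():
        release_impl(preview=False, force=force)
        return
    # Interactive real release: show a preview first, then ask.
    release_impl(preview=True, force=force)
    if confirm():
        release_impl(preview=False, force=force)
```

With a recording fake for `release_impl`, the four test scenarios above map directly onto the four branches of this sketch.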


@@ -1,146 +0,0 @@
from __future__ import annotations
import unittest
from unittest.mock import patch
from pkgmgr.actions.branch import open_branch
from pkgmgr.core.git import GitError
class TestOpenBranch(unittest.TestCase):
@patch("pkgmgr.actions.branch.run_git")
def test_open_branch_with_explicit_name_and_default_base(self, mock_run_git) -> None:
"""
open_branch(name, base='main') should:
- resolve base branch (prefers 'main', falls back to 'master')
- fetch origin
- checkout resolved base
- pull resolved base
- create new branch
- push with upstream
"""
mock_run_git.return_value = ""
open_branch(name="feature/test", base_branch="main", cwd="/repo")
# We expect a specific sequence of Git calls.
expected_calls = [
(["rev-parse", "--verify", "main"], "/repo"),
(["fetch", "origin"], "/repo"),
(["checkout", "main"], "/repo"),
(["pull", "origin", "main"], "/repo"),
(["checkout", "-b", "feature/test"], "/repo"),
(["push", "-u", "origin", "feature/test"], "/repo"),
]
self.assertEqual(mock_run_git.call_count, len(expected_calls))
for call, (args_expected, cwd_expected) in zip(
mock_run_git.call_args_list, expected_calls
):
args, kwargs = call
self.assertEqual(args[0], args_expected)
self.assertEqual(kwargs.get("cwd"), cwd_expected)
@patch("builtins.input", return_value="feature/interactive")
@patch("pkgmgr.actions.branch.run_git")
def test_open_branch_prompts_for_name_if_missing(
self,
mock_run_git,
mock_input,
) -> None:
"""
If name is None/empty, open_branch should prompt via input()
and still perform the full Git sequence on the resolved base.
"""
mock_run_git.return_value = ""
open_branch(name=None, base_branch="develop", cwd="/repo")
# Ensure we asked for input exactly once
mock_input.assert_called_once()
expected_calls = [
(["rev-parse", "--verify", "develop"], "/repo"),
(["fetch", "origin"], "/repo"),
(["checkout", "develop"], "/repo"),
(["pull", "origin", "develop"], "/repo"),
(["checkout", "-b", "feature/interactive"], "/repo"),
(["push", "-u", "origin", "feature/interactive"], "/repo"),
]
self.assertEqual(mock_run_git.call_count, len(expected_calls))
for call, (args_expected, cwd_expected) in zip(
mock_run_git.call_args_list, expected_calls
):
args, kwargs = call
self.assertEqual(args[0], args_expected)
self.assertEqual(kwargs.get("cwd"), cwd_expected)
@patch("pkgmgr.actions.branch.run_git")
def test_open_branch_raises_runtimeerror_on_fetch_failure(self, mock_run_git) -> None:
"""
If a GitError occurs on fetch, open_branch should raise a RuntimeError
with a helpful message.
"""
def side_effect(args, cwd="."):
# First call: base resolution (rev-parse) should succeed
if args == ["rev-parse", "--verify", "main"]:
return ""
# Second call: fetch should fail
if args == ["fetch", "origin"]:
raise GitError("simulated fetch failure")
return ""
mock_run_git.side_effect = side_effect
with self.assertRaises(RuntimeError) as cm:
open_branch(name="feature/fail", base_branch="main", cwd="/repo")
msg = str(cm.exception)
self.assertIn("Failed to fetch from origin", msg)
self.assertIn("simulated fetch failure", msg)
@patch("pkgmgr.actions.branch.run_git")
def test_open_branch_uses_fallback_master_if_main_missing(self, mock_run_git) -> None:
"""
If the preferred base (e.g. 'main') does not exist, open_branch should
fall back to the fallback base (default: 'master').
"""
def side_effect(args, cwd="."):
# First: rev-parse main -> fails
if args == ["rev-parse", "--verify", "main"]:
raise GitError("main does not exist")
# Second: rev-parse master -> succeeds
if args == ["rev-parse", "--verify", "master"]:
return ""
# Then normal flow on master
return ""
mock_run_git.side_effect = side_effect
open_branch(name="feature/fallback", base_branch="main", cwd="/repo")
expected_calls = [
(["rev-parse", "--verify", "main"], "/repo"),
(["rev-parse", "--verify", "master"], "/repo"),
(["fetch", "origin"], "/repo"),
(["checkout", "master"], "/repo"),
(["pull", "origin", "master"], "/repo"),
(["checkout", "-b", "feature/fallback"], "/repo"),
(["push", "-u", "origin", "feature/fallback"], "/repo"),
]
self.assertEqual(mock_run_git.call_count, len(expected_calls))
for call, (args_expected, cwd_expected) in zip(
mock_run_git.call_args_list, expected_calls
):
args, kwargs = call
self.assertEqual(args[0], args_expected)
self.assertEqual(kwargs.get("cwd"), cwd_expected)
if __name__ == "__main__":
unittest.main()
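The Git call sequence these tests expect can be sketched end to end; `run_git` is injected as a parameter here purely for illustration (the real `open_branch` imports it), and everything beyond the sequence itself is an assumption:

```python
class GitError(RuntimeError):
    """Stand-in for pkgmgr.core.git.GitError."""

def open_branch_sketch(run_git, name=None, base_branch="main",
                       fallback="master", cwd="."):
    # Resolve the base: prefer base_branch, fall back to 'master'.
    try:
        run_git(["rev-parse", "--verify", base_branch], cwd=cwd)
        base = base_branch
    except GitError:
        run_git(["rev-parse", "--verify", fallback], cwd=cwd)
        base = fallback
    # Prompt for a branch name if none was given.
    if not name:
        name = input("New branch name: ").strip()
    # Wrap fetch failures in a RuntimeError with a helpful message.
    try:
        run_git(["fetch", "origin"], cwd=cwd)
    except GitError as exc:
        raise RuntimeError(f"Failed to fetch from origin: {exc}") from exc
    run_git(["checkout", base], cwd=cwd)
    run_git(["pull", "origin", base], cwd=cwd)
    run_git(["checkout", "-b", name], cwd=cwd)
    run_git(["push", "-u", "origin", name], cwd=cwd)
```

Replaying the sketch against a recording fake reproduces the exact `expected_calls` lists asserted in the tests above.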


@@ -1,112 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Unit tests for the `pkgmgr branch` CLI wiring.
These tests verify that:
- The argument parser creates the correct structure for
`branch open` and `branch close`.
- `handle_branch` calls the corresponding helper functions
with the expected arguments (including base branch and cwd).
"""
from __future__ import annotations
import unittest
from unittest.mock import patch
from pkgmgr.cli.parser import create_parser
from pkgmgr.cli.commands.branch import handle_branch
class TestBranchCLI(unittest.TestCase):
"""
Tests for the branch subcommands implemented in cli.
"""
def _create_parser(self):
"""
Create the top-level parser with a minimal description.
"""
return create_parser("pkgmgr test parser")
@patch("pkgmgr.cli.commands.branch.open_branch")
def test_branch_open_with_name_and_base(self, mock_open_branch):
"""
Ensure that `pkgmgr branch open <name> --base <branch>` calls
open_branch() with the correct parameters.
"""
parser = self._create_parser()
args = parser.parse_args(
["branch", "open", "feature/test-branch", "--base", "develop"]
)
# Sanity check: parser wiring
self.assertEqual(args.command, "branch")
self.assertEqual(args.subcommand, "open")
self.assertEqual(args.name, "feature/test-branch")
self.assertEqual(args.base, "develop")
# ctx is currently unused by handle_branch, so we can pass None
handle_branch(args, ctx=None)
mock_open_branch.assert_called_once()
_args, kwargs = mock_open_branch.call_args
self.assertEqual(kwargs.get("name"), "feature/test-branch")
self.assertEqual(kwargs.get("base_branch"), "develop")
self.assertEqual(kwargs.get("cwd"), ".")
@patch("pkgmgr.cli.commands.branch.close_branch")
def test_branch_close_with_name_and_base(self, mock_close_branch):
"""
Ensure that `pkgmgr branch close <name> --base <branch>` calls
close_branch() with the correct parameters.
"""
parser = self._create_parser()
args = parser.parse_args(
["branch", "close", "feature/old-branch", "--base", "main"]
)
# Sanity check: parser wiring
self.assertEqual(args.command, "branch")
self.assertEqual(args.subcommand, "close")
self.assertEqual(args.name, "feature/old-branch")
self.assertEqual(args.base, "main")
handle_branch(args, ctx=None)
mock_close_branch.assert_called_once()
_args, kwargs = mock_close_branch.call_args
self.assertEqual(kwargs.get("name"), "feature/old-branch")
self.assertEqual(kwargs.get("base_branch"), "main")
self.assertEqual(kwargs.get("cwd"), ".")
@patch("pkgmgr.cli.commands.branch.close_branch")
def test_branch_close_without_name_uses_none(self, mock_close_branch):
"""
Ensure that `pkgmgr branch close` without a name passes name=None
into close_branch(), leaving branch resolution to the helper.
"""
parser = self._create_parser()
args = parser.parse_args(["branch", "close"])
# Parser wiring: no name → None
self.assertEqual(args.command, "branch")
self.assertEqual(args.subcommand, "close")
self.assertIsNone(args.name)
handle_branch(args, ctx=None)
mock_close_branch.assert_called_once()
_args, kwargs = mock_close_branch.call_args
self.assertIsNone(kwargs.get("name"))
self.assertEqual(kwargs.get("base_branch"), "main")
self.assertEqual(kwargs.get("cwd"), ".")
if __name__ == "__main__":
unittest.main()


@@ -22,6 +22,10 @@ class TestCliBranch(unittest.TestCase):
user_config_path="/tmp/config.yaml",
)
# ------------------------------------------------------------------
# open subcommand
# ------------------------------------------------------------------
@patch("pkgmgr.cli.commands.branch.open_branch")
def test_handle_branch_open_forwards_args_to_open_branch(self, mock_open_branch) -> None:
"""
@@ -73,13 +77,15 @@ class TestCliBranch(unittest.TestCase):
@patch("pkgmgr.cli.commands.branch.close_branch")
def test_handle_branch_close_forwards_args_to_close_branch(self, mock_close_branch) -> None:
"""
handle_branch('close') should call close_branch with name, base and cwd='.'.
handle_branch('close') should call close_branch with name, base,
cwd='.' and force=False by default.
"""
args = SimpleNamespace(
command="branch",
subcommand="close",
name="feature/cli-close",
base="develop",
force=False,
)
ctx = self._dummy_ctx()
@@ -91,6 +97,7 @@ class TestCliBranch(unittest.TestCase):
self.assertEqual(call_kwargs.get("name"), "feature/cli-close")
self.assertEqual(call_kwargs.get("base_branch"), "develop")
self.assertEqual(call_kwargs.get("cwd"), ".")
self.assertFalse(call_kwargs.get("force"))
@patch("pkgmgr.cli.commands.branch.close_branch")
def test_handle_branch_close_uses_default_base_when_not_set(self, mock_close_branch) -> None:
@@ -103,6 +110,7 @@ class TestCliBranch(unittest.TestCase):
subcommand="close",
name=None,
base="main",
force=False,
)
ctx = self._dummy_ctx()
@@ -114,6 +122,113 @@ class TestCliBranch(unittest.TestCase):
self.assertIsNone(call_kwargs.get("name"))
self.assertEqual(call_kwargs.get("base_branch"), "main")
self.assertEqual(call_kwargs.get("cwd"), ".")
self.assertFalse(call_kwargs.get("force"))
@patch("pkgmgr.cli.commands.branch.close_branch")
def test_handle_branch_close_with_force_true(self, mock_close_branch) -> None:
"""
handle_branch('close') should pass force=True when the args specify it.
"""
args = SimpleNamespace(
command="branch",
subcommand="close",
name="feature/cli-close-force",
base="main",
force=True,
)
ctx = self._dummy_ctx()
handle_branch(args, ctx)
mock_close_branch.assert_called_once()
_, call_kwargs = mock_close_branch.call_args
self.assertEqual(call_kwargs.get("name"), "feature/cli-close-force")
self.assertEqual(call_kwargs.get("base_branch"), "main")
self.assertEqual(call_kwargs.get("cwd"), ".")
self.assertTrue(call_kwargs.get("force"))
# ------------------------------------------------------------------
# drop subcommand
# ------------------------------------------------------------------
@patch("pkgmgr.cli.commands.branch.drop_branch")
def test_handle_branch_drop_forwards_args_to_drop_branch(self, mock_drop_branch) -> None:
"""
handle_branch('drop') should call drop_branch with name, base,
cwd='.' and force=False by default.
"""
args = SimpleNamespace(
command="branch",
subcommand="drop",
name="feature/cli-drop",
base="develop",
force=False,
)
ctx = self._dummy_ctx()
handle_branch(args, ctx)
mock_drop_branch.assert_called_once()
_, call_kwargs = mock_drop_branch.call_args
self.assertEqual(call_kwargs.get("name"), "feature/cli-drop")
self.assertEqual(call_kwargs.get("base_branch"), "develop")
self.assertEqual(call_kwargs.get("cwd"), ".")
self.assertFalse(call_kwargs.get("force"))
@patch("pkgmgr.cli.commands.branch.drop_branch")
def test_handle_branch_drop_uses_default_base_when_not_set(self, mock_drop_branch) -> None:
"""
If --base is not passed for 'drop', argparse gives base='main'
(default), and handle_branch should propagate that to drop_branch.
"""
args = SimpleNamespace(
command="branch",
subcommand="drop",
name=None,
base="main",
force=False,
)
ctx = self._dummy_ctx()
handle_branch(args, ctx)
mock_drop_branch.assert_called_once()
_, call_kwargs = mock_drop_branch.call_args
self.assertIsNone(call_kwargs.get("name"))
self.assertEqual(call_kwargs.get("base_branch"), "main")
self.assertEqual(call_kwargs.get("cwd"), ".")
self.assertFalse(call_kwargs.get("force"))
@patch("pkgmgr.cli.commands.branch.drop_branch")
def test_handle_branch_drop_with_force_true(self, mock_drop_branch) -> None:
"""
handle_branch('drop') should pass force=True when the args specify it.
"""
args = SimpleNamespace(
command="branch",
subcommand="drop",
name="feature/cli-drop-force",
base="main",
force=True,
)
ctx = self._dummy_ctx()
handle_branch(args, ctx)
mock_drop_branch.assert_called_once()
_, call_kwargs = mock_drop_branch.call_args
self.assertEqual(call_kwargs.get("name"), "feature/cli-drop-force")
self.assertEqual(call_kwargs.get("base_branch"), "main")
self.assertEqual(call_kwargs.get("cwd"), ".")
self.assertTrue(call_kwargs.get("force"))
# ------------------------------------------------------------------
# unknown subcommand
# ------------------------------------------------------------------
def test_handle_branch_unknown_subcommand_exits_with_code_2(self) -> None:
"""