Compare commits

...

50 Commits

Author SHA1 Message Date
Kevin Veen-Birkenbach
7cfd7e8d5c Release version 1.6.3
2025-12-14 13:39:52 +01:00
Kevin Veen-Birkenbach
84b6c71748 test(integration): add unittest-based repository layout contract test
- Add integration test using unittest to verify canonical repository paths
- Assert pkgmgr repository satisfies template layout (packaging, changelog, metadata)
- Use real filesystem without mocks or pytest dependencies
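A minimal sketch of what such a unittest-based layout contract test could look like; the checked locations (pyproject.toml, CHANGELOG.md, packaging/) and the tests/integration/ placement are assumptions, not the exact contract from this commit:

```python
import unittest
from pathlib import Path

# Assumes this file lives in tests/integration/; adjust parents[...] otherwise.
REPO_ROOT = Path(__file__).resolve().parents[2]


class TestRepositoryLayout(unittest.TestCase):
    """Contract test: the repository must satisfy the canonical template layout."""

    def test_canonical_paths_exist(self):
        # Hypothetical canonical locations; the real contract may list other files.
        for path in (
            REPO_ROOT / "pyproject.toml",
            REPO_ROOT / "CHANGELOG.md",
            REPO_ROOT / "packaging",
        ):
            with self.subTest(path=path):
                self.assertTrue(path.exists(), f"missing required path: {path}")


if __name__ == "__main__":
    unittest.main()
```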

https://chatgpt.com/share/693eaa75-98f0-800f-adca-439555f84154
2025-12-14 13:26:18 +01:00
Kevin Veen-Birkenbach
db9aaf920e refactor(release,version): centralize repository path resolution and validate template layout
- Introduce RepoPaths resolver as single source of truth for repository file locations
- Update release workflow to use resolved packaging and changelog paths
- Update version readers to rely on the shared path resolver
- Add integration test asserting pkgmgr repository satisfies canonical template layout
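A rough sketch of what such a RepoPaths-style resolver could look like, assuming a packaging/ directory and a top-level CHANGELOG.md; the field and property names are illustrative, not pkgmgr's actual API:

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass(frozen=True)
class RepoPaths:
    """Single source of truth for well-known file locations in a repository."""
    root: Path

    @property
    def pyproject(self) -> Path:
        return self.root / "pyproject.toml"

    @property
    def changelog(self) -> Path:
        return self.root / "CHANGELOG.md"

    @property
    def packaging_dir(self) -> Path:
        return self.root / "packaging"

    def validate(self) -> None:
        """Raise if the repository does not match the expected template layout."""
        missing = [p for p in (self.pyproject, self.changelog, self.packaging_dir)
                   if not p.exists()]
        if missing:
            raise FileNotFoundError(f"repository layout incomplete, missing: {missing}")
```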

https://chatgpt.com/share/693eaa75-98f0-800f-adca-439555f84154
2025-12-14 13:15:41 +01:00
Kevin Veen-Birkenbach
69d28a461d Release version 1.6.2
2025-12-14 12:58:35 +01:00
Kevin Veen-Birkenbach
03e414cc9f fix(version): add tomli fallback for Python < 3.11
- Add conditional runtime dependency on tomli for Python < 3.11
- Fix crash on CentOS / Python 3.9 when reading pyproject.toml
- Ensure version command works consistently across distros
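The usual way to implement such a fallback is a guarded import, roughly:

```python
try:
    import tomllib  # standard library on Python 3.11+
except ModuleNotFoundError:
    import tomli as tomllib  # backport for Python < 3.11 (pip install tomli)


def read_project_version(pyproject_path: str) -> str:
    # tomllib/tomli require the file to be opened in binary mode.
    with open(pyproject_path, "rb") as fh:
        data = tomllib.load(fh)
    return data["project"]["version"]
```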

https://chatgpt.com/share/693ea1cb-41a0-800f-b4dc-4ff507eb60c6
2025-12-14 12:38:43 +01:00
Kevin Veen-Birkenbach
7674762c9a feat(version): show installed pkgmgr version when no repo is selected
- Add installed version detection for Python environments and Nix profiles
- Display pkgmgr’s own installed version when run outside a repository
- Improve version command output to include installed vs source versions
- Prefer editable venv setup as default in Makefile setup target
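A minimal sketch of the installed-version detection described above for the Python-environment case, assuming the distribution is published under the name pkgmgr; Nix-profile detection would need a separate lookup:

```python
from importlib import metadata
from typing import Optional


def installed_pkgmgr_version() -> Optional[str]:
    """Return the installed pkgmgr distribution version, or None if not installed.

    importlib.metadata covers both regular and editable venv installs.
    """
    try:
        return metadata.version("pkgmgr")
    except metadata.PackageNotFoundError:
        return None
```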

https://chatgpt.com/share/693e9f02-9b34-800f-8eeb-c7c776b3faa7
2025-12-14 12:26:50 +01:00
Kevin Veen-Birkenbach
a47de15e42 Release version 1.6.1
2025-12-14 12:01:52 +01:00
Kevin Veen-Birkenbach
37f3057d31 fix(nix): resolve Ruff F821 via TYPE_CHECKING and stabilize NixFlakeInstaller tests
- Add TYPE_CHECKING imports for RepoContext and CommandRunner to avoid runtime deps
- Fix Ruff F821 undefined-name errors in nix installer modules
- Refactor legacy NixFlakeInstaller unit tests to mock subprocess.run directly
- Remove obsolete run_cmd_mock usage and assert install calls via subprocess calls
- Ensure tests run without realtime waits or external nix dependencies
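The TYPE_CHECKING pattern referenced above looks roughly like this; the pkgmgr.context / pkgmgr.runner module paths are assumptions about the internal layout:

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Imported only for static analysis: no runtime dependency is created,
    # and Ruff no longer flags the annotations below as F821 undefined names.
    from pkgmgr.context import RepoContext   # hypothetical module path
    from pkgmgr.runner import CommandRunner  # hypothetical module path


class NixFlakeInstaller:
    def __init__(self, ctx: RepoContext, runner: CommandRunner) -> None:
        self.ctx = ctx
        self.runner = runner
```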

https://chatgpt.com/share/693e925d-a79c-800f-b0b6-92b8ba260b11
2025-12-14 11:43:33 +01:00
Kevin Veen-Birkenbach
d55c8d3726 refactor(nix): split NixFlakeInstaller into atomic modules and add GitHub 403 retry handling
- Move Nix flake installer into installers/nix/ with atomic components
  (installer, runner, profile, retry, types)
- Preserve legacy behavior and semantics of NixFlakeInstaller
- Add GitHub API 403 rate-limit retry with Fibonacci backoff + jitter
- Update all imports to new nix module path
- Rename legacy unit tests and adapt patches to new structure
- Add unit test for simulated GitHub 403 retry without realtime sleeping
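A sketch of the Fibonacci backoff with jitter mentioned in the list above; how a GitHub 403 rate-limit error is detected is an assumption (a plain RuntimeError stands in for it here):

```python
import random
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def retry_on_rate_limit(op: Callable[[], T], attempts: int = 6) -> T:
    """Retry `op` on rate-limit errors, sleeping along the Fibonacci sequence
    (1, 1, 2, 3, 5, ...) plus a small random jitter between attempts."""
    a, b = 1, 1
    for attempt in range(1, attempts + 1):
        try:
            return op()
        except RuntimeError:  # stand-in for "GitHub API returned 403 / rate limited"
            if attempt == attempts:
                raise
            time.sleep(a + random.uniform(0, 0.5))  # Fibonacci delay + jitter
            a, b = b, a + b
    raise AssertionError("unreachable")
```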

https://chatgpt.com/share/693e925d-a79c-800f-b0b6-92b8ba260b11
2025-12-14 11:32:48 +01:00
Kevin Veen-Birkenbach
3990560cd7 Release version 1.6.0
2025-12-14 10:51:40 +01:00
Kevin Veen-Birkenbach
d1e5a71f77 Merge branch 'feature/mirror-provision'
2025-12-14 10:45:51 +01:00
Kevin Veen-Birkenbach
d59dc8ad53 fix(cli): route update exclusively through UpdateManager
* Remove `update` from repos command dispatch
* Prevent update from being handled by `handle_repos_command`
* Ensure top-level `update` always uses UpdateManager
* Fix "Unknown repos command: update" error after refactor

https://chatgpt.com/share/693e7ee9-2658-800f-985f-293ed0c8efbc
2025-12-14 10:09:46 +01:00
Kevin Veen-Birkenbach
55f4a1e941 refactor(update): move update logic to unified UpdateManager and extend system support
- Move update orchestration from repository scope to actions/update
- Introduce UpdateManager and SystemUpdater with distro detection
- Add Arch, Debian/Ubuntu, and Fedora/RHEL system update handling
- Rename CLI flag from --system-update to --system
- Route update as a top-level command in CLI dispatch
- Remove legacy update_repos implementation
- Add E2E tests for:
  - update all without system updates
  - update single repo (pkgmgr) with system updates
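A sketch of the distro detection behind a SystemUpdater as described in the list above, keyed on whichever package manager is present; the exact commands pkgmgr runs may differ:

```python
import shutil
import subprocess


def system_update_command() -> list[str]:
    """Pick a system update command based on the package manager that is present."""
    if shutil.which("pacman"):    # Arch
        return ["pacman", "-Syu", "--noconfirm"]
    if shutil.which("apt-get"):   # Debian / Ubuntu
        return ["apt-get", "dist-upgrade", "-y"]
    if shutil.which("dnf"):       # Fedora / RHEL
        return ["dnf", "upgrade", "-y"]
    raise RuntimeError("no supported system package manager found")


def run_system_update(preview: bool = False) -> None:
    cmd = system_update_command()
    if preview:
        print("would run:", " ".join(cmd))
        return
    subprocess.run(cmd, check=True)
```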

https://chatgpt.com/share/693e76ec-5ee4-800f-9623-3983f56d5430
2025-12-14 09:35:52 +01:00
Kevin Veen-Birkenbach
2a4ec18532 Changed argument order
2025-12-14 08:51:37 +01:00
Kevin Veen-Birkenbach
2debdbee09 * **Split mirror responsibilities into clear subcommands**
Setup configures local Git state, check validates remote reachability in a read-only way, and provision explicitly creates missing remote repositories. Destructive behavior is never implicit.

* **Introduce a remote provisioning layer**
  pkgmgr can now ensure that repositories exist on remote providers. If a repository is missing, it can be created automatically on supported platforms when explicitly requested.

* **Add a provider registry for extensibility**
  Providers are resolved based on the remote host, with optional hints to force a specific backend. This makes it straightforward to add further providers later without changing the core logic.
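A possible shape for such a host-keyed provider registry (class and host names are illustrative):

```python
from typing import Callable, Dict, Optional


class Provider:
    """Minimal provider interface: ensure a remote repository exists."""

    def ensure_repo(self, owner: str, name: str) -> None:
        raise NotImplementedError


_PROVIDERS: Dict[str, Callable[[], Provider]] = {}


def register(host: str):
    """Class decorator that registers a provider factory for a remote host."""
    def decorator(factory: Callable[[], Provider]) -> Callable[[], Provider]:
        _PROVIDERS[host] = factory
        return factory
    return decorator


def resolve(host: str, hint: Optional[str] = None) -> Provider:
    """Resolve a provider by remote host, or by an explicit hint that forces a backend."""
    key = hint or host
    try:
        return _PROVIDERS[key]()
    except KeyError:
        raise LookupError(f"no provider registered for {key!r}") from None


@register("github.com")
class GitHubProvider(Provider):
    def ensure_repo(self, owner: str, name: str) -> None:
        print(f"would create {owner}/{name} on github.com if it is missing")
```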

* **Use a lightweight, dependency-free HTTP client**
  All API communication is handled via a small stdlib-based client. HTTP errors are mapped to meaningful domain errors, improving diagnostics and error handling consistency.
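A stdlib-only JSON client along those lines might look like this; the header names and error type are assumptions:

```python
import json
import urllib.error
import urllib.request
from typing import Optional


class ProviderAPIError(RuntimeError):
    """Domain error raised for failed provider API calls."""


def api_request(url: str, token: str, payload: Optional[dict] = None) -> dict:
    """Tiny stdlib-only JSON client that maps HTTP errors to a domain error."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(
        url,
        data=data,
        headers={"Authorization": f"token {token}", "Content-Type": "application/json"},
        method="POST" if data else "GET",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode() or "{}")
    except urllib.error.HTTPError as exc:
        raise ProviderAPIError(f"{url}: HTTP {exc.code} {exc.reason}") from exc
```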

* **Centralize credential resolution**
  API tokens are resolved in a strict order: environment variables first, then the system keyring, and finally an interactive prompt if allowed. This works well for both CI and interactive use.
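A sketch of that env → keyring → prompt order, with the environment variable name and keyring service name as assumptions; it also shows how the keyring dependency stays optional:

```python
import getpass
import os
from typing import Optional


def resolve_token(provider: str, interactive: bool = True) -> Optional[str]:
    """Resolve an API token: environment first, then keyring, then prompt."""
    # 1. Environment variable (works well in CI); variable name is an assumption.
    token = os.environ.get(f"PKGMGR_{provider.upper()}_TOKEN")
    if token:
        return token

    # 2. OS keyring, if the optional dependency is installed.
    try:
        import keyring
        token = keyring.get_password("pkgmgr", provider)
        if token:
            return token
    except ImportError:
        pass  # keyring is optional; fall through

    # 3. One-off interactive prompt, if allowed.
    if interactive:
        return getpass.getpass(f"API token for {provider}: ") or None
    return None
```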

* **Keep keyring integration optional**
  Secure token storage via the OS keyring is provided as an optional dependency. If unavailable, pkgmgr still works using environment variables or one-off interactive tokens.

* **Improve CLI parser safety and clarity**
  Shared argument helpers now guard against duplicate definitions, making composed subcommands more robust and easier to maintain.
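One way such a guard can work is to let argparse's own duplicate detection drive idempotency; --preview is just an example flag:

```python
import argparse


def add_preview_flag(parser: argparse.ArgumentParser) -> None:
    """Add --preview only once, even when several composed subcommands request it.

    argparse raises ArgumentError for duplicate option strings, so swallowing
    that error keeps the shared helper idempotent.
    """
    try:
        parser.add_argument("--preview", action="store_true",
                            help="show what would happen without making changes")
    except argparse.ArgumentError:
        pass  # flag already defined on this parser
```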

* **Expand end-to-end test coverage**
  All mirror-related workflows are exercised through real CLI invocations in preview mode, ensuring full wiring correctness while remaining safe for automated test environments.

https://chatgpt.com/share/693df441-a780-800f-bcf7-96e06cc9e421
2025-12-14 00:16:54 +01:00
Kevin Veen-Birkenbach
4cb62e90f8 refactor: move nix experimental feature setup to nix.conf and rename pkgmgr wrapper
https://chatgpt.com/share/693dcbad-3d30-800f-acfe-22f7263f3e80
2025-12-13 21:25:02 +01:00
Kevin Veen-Birkenbach
923519497a Updated Homepage
2025-12-13 20:41:06 +01:00
Kevin Veen-Birkenbach
5fa18cb449 Merge branch 'fix/self-install'
2025-12-13 20:09:17 +01:00
Kevin Veen-Birkenbach
f513196911 Used correct tabulation
2025-12-13 20:08:30 +01:00
Kevin Veen-Birkenbach
7f06447bbd feat(cli): add --system-update flag to update command
- Register --system-update for `pkgmgr update`
- Expose args.system_update for update workflow
- Align CLI with update_repos and E2E tests

https://chatgpt.com/share/693db645-c420-800f-b921-9d5c0356d0ac
2025-12-13 20:02:48 +01:00
Kevin Veen-Birkenbach
1e5d6d3eee test(unit): update NixFlakeInstaller tests for new run_command-based logic
- Adapt DummyCtx to include quiet and force_update flags
- Replace os.system mocking with run_command/subprocess mocks
- Align assertions with new Nix install/upgrade output
- Keep coverage for mandatory vs optional output handling

https://chatgpt.com/share/693db645-c420-800f-b921-9d5c0356d0ac
2025-12-13 19:53:34 +01:00
Kevin Veen-Birkenbach
f2970adbb2 test(e2e): enforce --system-update and isolate update-all integration tests
- Require --system-update for update-all integration tests
- Run tests with isolated HOME and temporary gitconfig
- Allow /src as git safe.directory for nix run
- Capture and print combined stdout/stderr on failure
- Ensure consistent environment for pkgmgr and nix-run executions
2025-12-13 19:49:40 +01:00
Kevin Veen-Birkenbach
7f262c6557 feat(install): add --update to re-run active-layer installers and improve Nix refresh logic
* Add `force_update` to `RepoContext` and propagate it through install/update flows
* Add `pkgmgr install --update` to force re-running installers even if the same CLI layer is already loaded
* Enhance `NixFlakeInstaller` to ensure correct outputs (pkgmgr + optional default for package-manager) and support refresh/upgrade with index-based fallback remove+reinstall
* Make Python/Makefile installers emit an “upgraded” marker when `force_update` is used
* Add E2E tests for “three times install” scenarios (makefile, nix, venv) with shared run helper
* Fix git safe.directory wildcard quoting in E2E shell runner and minor cleanup/reordering of imports/comments

https://chatgpt.com/share/693db0b4-6ea4-800f-b44a-f03939c7fb9e
2025-12-13 19:30:06 +01:00
Kevin Veen-Birkenbach
0bc7a3ecc0 ci(nix): retry flake evaluation on GitHub API rate limits
Add a reusable retry helper that detects GitHub API 403 rate-limit errors
during Nix flake evaluation and retries with exponential backoff.

Apply the retry logic to flake-only CI tests so transient GitHub rate
limits no longer cause random CI failures while preserving fast failure
for real errors.

https://chatgpt.com/share/693d7ec5-ac70-800f-a627-ef705c653ba1
2025-12-13 15:57:05 +01:00
Kevin Veen-Birkenbach
55a0ae4337 Release version 1.5.0
2025-12-13 15:43:19 +01:00
Kevin Veen-Birkenbach
bcf284c5d6 Solved variable naming bug
2025-12-13 15:33:38 +01:00
Kevin Veen-Birkenbach
db23b1a445 Solved ruff hints
2025-12-13 15:30:10 +01:00
Kevin Veen-Birkenbach
506f69d8a7 Solved variable bug
2025-12-13 15:27:06 +01:00
Kevin Veen-Birkenbach
097e64408f Fix repository deinstall logic and add unit tests for repository helpers
- Fix undefined repo_dir usage in repository deinstall action
- Centralize and harden get_repo_dir with strict validation and clear errors
- Expand user paths for repository base and binary directories
- Add unit tests for get_repo_dir and deinstall_repos
- Add comprehensive tests for resolve_repos identifier matching
- Remove obsolete command resolution tests no longer applicable
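A hardened get_repo_dir along the lines described in the list above might look like this; the <base>/<provider>/<account>/<repository> layout is an assumption, the strict validation and path expansion are the point:

```python
import os
from pathlib import Path


def get_repo_dir(base_dir: str, provider: str, account: str, repository: str) -> Path:
    """Resolve a repository checkout directory with strict validation."""
    for name, value in (("provider", provider), ("account", account),
                        ("repository", repository)):
        if not value or "/" in value or value in (".", ".."):
            raise ValueError(f"invalid {name}: {value!r}")
    return Path(os.path.expanduser(base_dir)) / provider / account / repository
```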

https://chatgpt.com/share/693d7442-c2d0-800f-9ff3-fb84d60eaeb4
2025-12-13 15:12:12 +01:00
Kevin Veen-Birkenbach
a3913d9489 Solved variable bug
2025-12-13 15:05:34 +01:00
Kevin Veen-Birkenbach
c92fd44dd3 fix(uninstall): robustly remove pkgmgr venv auto-activation and leftover shell RC entries
2025-12-13 14:48:59 +01:00
Kevin Veen-Birkenbach
2c3efa7a27 Solved shellcheck quoting issue
2025-12-13 14:38:37 +01:00
Kevin Veen-Birkenbach
f388bc51bc Ruff autofix
2025-12-13 14:36:55 +01:00
Kevin Veen-Birkenbach
4e28eba883 refactor(ci,build,test): rename distro to PKGMGR_DISTRO for consistent environment handling
https://chatgpt.com/share/693d6b63-12cc-800f-b55f-abc52ee7fb52
2025-12-13 14:34:15 +01:00
Kevin Veen-Birkenbach
b8acd634f8 Improve run_command error diagnostics with live output capture
Switch run_command to a single-run execution model that streams stdout/stderr
live while capturing both streams in memory using selectors. This guarantees
that command errors (e.g. make install, pip, nix) always show full diagnostics
without re-running commands or risking deadlocks.
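A simplified sketch of that selectors-based single-run model (preview mode and allow_failure handling omitted):

```python
import selectors
import subprocess
import sys


def run_command(cmd: list[str]) -> tuple[int, str, str]:
    """Run a command once, streaming stdout/stderr live while capturing both."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
    sel = selectors.DefaultSelector()
    sel.register(proc.stdout, selectors.EVENT_READ, ("out", sys.stdout))
    sel.register(proc.stderr, selectors.EVENT_READ, ("err", sys.stderr))
    captured = {"out": [], "err": []}

    while sel.get_map():
        for key, _ in sel.select():
            line = key.fileobj.readline()
            if not line:                    # stream closed
                sel.unregister(key.fileobj)
                continue
            name, live_stream = key.data
            live_stream.write(line)         # stream live to the console
            captured[name].append(line)     # and keep it for diagnostics

    return proc.wait(), "".join(captured["out"]), "".join(captured["err"])
```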

Add unit tests for preview mode, success execution, failure handling, and
allow_failure behavior.

Context:
https://chatgpt.com/share/replace-with-this-conversation-link
2025-12-13 14:29:53 +01:00
Kevin Veen-Birkenbach
fb68b325d6 Fix ShellCheck warnings and harden shell scripts
- Quote Docker volume names to avoid word splitting
- Add missing shebangs for proper shell detection
- Annotate sourced scripts for ShellCheck resolution
- Remove unused variables
- Explicitly disable SC2016 where literal RC strings are intended
- Improve robustness of cleanup logic

https://chatgpt.com/share/693d6557-a080-800f-8915-c57476569232
2025-12-13 14:08:35 +01:00
Kevin Veen-Birkenbach
650a22d425 Changed to a different codesniffer formatting solution
2025-12-13 14:00:06 +01:00
Kevin Veen-Birkenbach
6a590d8780 Solved save user config bug 2025-12-13 13:55:49 +01:00
Kevin Veen-Birkenbach
5601ea442a **Refactor CI: make Ruff and ShellCheck reusable via workflow_call**
* Convert Ruff and ShellCheck workflows to `workflow_call`
* Remove direct `push` / `pull_request` triggers
* Run sniffers only through centralized CI and release pipelines
* Prevent duplicate and uncontrolled sniffer executions

https://chatgpt.com/share/693d5f9a-5e70-800f-95da-837be2aedb4f
2025-12-13 13:44:04 +01:00
Kevin Veen-Birkenbach
5ff15013d7 Fix: remove unnecessary f-strings without interpolation
Remove extraneous f-string prefixes from string literals that do not contain
placeholders. This resolves Ruff F541 warnings without changing runtime
behavior or output.

https://chatgpt.com/share/693d5f15-f9e8-800f-bf69-b0dee0e4449c
2025-12-13 13:41:26 +01:00
Kevin Veen-Birkenbach
6ccc1c1490 Removed further Optional double imports
2025-12-13 13:36:11 +01:00
Kevin Veen-Birkenbach
8ead3472dd Removed double import 2025-12-13 13:33:34 +01:00
Kevin Veen-Birkenbach
422ac8b837 **Enable Nix experimental features system-wide and refactor Nix bootstrap config**
* Rename `config.sh` to `bootstrap_config.sh` to clearly separate installer bootstrap config from Nix system config
* Add `nix_conf_file.sh` to manage `/etc/nix/nix.conf` safely and idempotently
* Ensure `nix-command` and `flakes` are enabled without overwriting existing experimental features
* Invoke Nix config enforcement from `nix/init.sh` during root installation
* Update documentation and ShellCheck annotations accordingly
* Extend CLI git proxy to include `git status`

https://chatgpt.com/share/693d5c4a-bad0-800f-adaf-4719dd4ca377
2025-12-13 13:29:48 +01:00
Kevin Veen-Birkenbach
ea84c1b14e Add ShellCheck and Ruff code sniffers to CI and release workflows
- Introduce dedicated ShellCheck workflow for Bash scripts
- Add Ruff as Python code sniffer for src/ and tests/
- Integrate both sniffers into main CI pipeline
- Require successful sniffer runs before marking a release as stable
- Ensure consistent code quality checks across CI and release workflows

https://chatgpt.com/share/693d5b26-293c-800f-999d-48b2950b9417
2025-12-13 13:24:58 +01:00
Kevin Veen-Birkenbach
71a4e7e725 Added git status proxy
2025-12-13 13:13:03 +01:00
Kevin Veen-Birkenbach
fb737ef290 Optimized Changelog
2025-12-13 08:40:37 +01:00
Kevin Veen-Birkenbach
2963a43754 **Refactor README: streamline rationale, features, install and run sections**
* Simplify *Why PKGMGR* into concise prose and add Docker images as reproducible system baselines linked to Infinito.Nexus
* Condense Features into a single, readable overview without command lists
* Clean up Architecture section and keep diagram metadata consistent
* Reorganize Installation with clear download, dependencies, install and setup modes
* Introduce a unified *Run PKGMGR* section differentiating Nix, Docker and venv usage with consistent examples
2025-12-13 08:34:39 +01:00
Kevin Veen-Birkenbach
103f49c8f6 Release version 1.4.1
2025-12-12 23:06:15 +01:00
Kevin Veen-Birkenbach
f5d428950e **Replace main.py with module-based entry point and unify CLI execution**
* Remove legacy *main.py* and introduce *pkgmgr* module entry via *python -m pkgmgr*
* Add *__main__.py* as the canonical entry point delegating to the CLI
* Export *PYTHONPATH=src* in Makefile to ensure reliable imports in dev and CI
* Update setup scripts (venv & nix) to use module execution
* Refactor all E2E tests to execute the real module entry instead of file paths

This aligns pkgmgr with standard Python packaging practices and simplifies testing, setup, and execution across environments.
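For reference, such a __main__.py entry point is typically only a few lines; the pkgmgr.cli.main import path is an assumption about the package layout:

```python
# src/pkgmgr/__main__.py — enables "python -m pkgmgr".
from pkgmgr.cli import main  # hypothetical import path

if __name__ == "__main__":
    raise SystemExit(main())
```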

https://chatgpt.com/share/693c9056-716c-800f-b583-fc9245eab2b4
2025-12-12 22:59:46 +01:00
Kevin Veen-Birkenbach
b40787ffc5 ci: publish GHCR images after successful mark-stable workflow
Trigger container publishing via workflow_run on "Mark stable commit", gate on success,
checkout the workflow_run head SHA, force-refresh tags, and derive version from the v* tag
pointing at the tested commit to correctly detect and publish stable images.

https://chatgpt.com/share/693c836b-0b00-800f-9536-9e273abd0fb5
2025-12-12 22:50:33 +01:00
151 changed files with 4202 additions and 1716 deletions


@@ -27,3 +27,9 @@ jobs:
test-virgin-root:
uses: ./.github/workflows/test-virgin-root.yml
codesniffer-shellcheck:
uses: ./.github/workflows/codesniffer-shellcheck.yml
codesniffer-ruff:
uses: ./.github/workflows/codesniffer-ruff.yml

.github/workflows/codesniffer-ruff.yml

@@ -0,0 +1,23 @@
name: Ruff (Python code sniffer)
on:
workflow_call:
jobs:
codesniffer-ruff:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install ruff
run: pip install ruff
- name: Run ruff
run: |
ruff check src tests


@@ -0,0 +1,14 @@
name: ShellCheck
on:
workflow_call:
jobs:
codesniffer-shellcheck:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install ShellCheck
run: sudo apt-get update && sudo apt-get install -y shellcheck
- name: Run ShellCheck
run: shellcheck -x $(find scripts -type f -name '*.sh' -print)


@@ -29,8 +29,16 @@ jobs:
test-virgin-root:
uses: ./.github/workflows/test-virgin-root.yml
codesniffer-shellcheck:
uses: ./.github/workflows/codesniffer-shellcheck.yml
codesniffer-ruff:
uses: ./.github/workflows/codesniffer-ruff.yml
mark-stable:
needs:
- codesniffer-shellcheck
- codesniffer-ruff
- test-unit
- test-integration
- test-env-nix


@@ -1,12 +1,13 @@
name: Publish container images (GHCR)
on:
push:
tags:
- "v*"
workflow_run:
workflows: ["Mark stable commit"]
types: [completed]
jobs:
publish:
if: ${{ github.event.workflow_run.conclusion == 'success' }}
runs-on: ubuntu-latest
permissions:
@@ -20,13 +21,22 @@ jobs:
fetch-depth: 0
fetch-tags: true
- name: Checkout workflow_run commit and refresh tags
run: |
set -euo pipefail
git checkout -f "${{ github.event.workflow_run.head_sha }}"
git fetch --tags --force
git tag --list 'stable' 'v*' --sort=version:refname | tail -n 20
- name: Compute version and stable flag
id: info
run: |
set -euo pipefail
SHA="$(git rev-parse HEAD)"
VERSION="${GITHUB_REF_NAME#v}"
V_TAG="$(git tag --points-at "${SHA}" --list 'v*' | sort -V | tail -n1)"
[[ -n "$V_TAG" ]] || { echo "No version tag found"; exit 1; }
VERSION="${V_TAG#v}"
STABLE_SHA="$(git rev-parse -q --verify refs/tags/stable^{commit} 2>/dev/null || true)"
IS_STABLE=false


@@ -22,4 +22,4 @@ jobs:
- name: Run E2E tests via make (${{ matrix.distro }})
run: |
set -euo pipefail
distro="${{ matrix.distro }}" make test-e2e
PKGMGR_DISTRO="${{ matrix.distro }}" make test-e2e


@@ -23,4 +23,4 @@ jobs:
- name: Nix flake-only test (${{ matrix.distro }})
run: |
set -euo pipefail
distro="${{ matrix.distro }}" make test-env-nix
PKGMGR_DISTRO="${{ matrix.distro }}" make test-env-nix


@@ -25,4 +25,4 @@ jobs:
- name: Run container tests (${{ matrix.distro }})
run: |
set -euo pipefail
distro="${{ matrix.distro }}" make test-env-virtual
PKGMGR_DISTRO="${{ matrix.distro }}" make test-env-virtual


@@ -16,4 +16,4 @@ jobs:
run: docker version
- name: Run integration tests via make (Arch container)
run: make test-integration distro="arch"
run: make test-integration PKGMGR_DISTRO="arch"


@@ -16,4 +16,4 @@ jobs:
run: docker version
- name: Run unit tests via make (Arch container)
run: make test-unit distro="arch"
run: make test-unit PKGMGR_DISTRO="arch"


@@ -23,7 +23,7 @@ jobs:
- name: Build virgin container (${{ matrix.distro }})
run: |
set -euo pipefail
distro="${{ matrix.distro }}" make build-missing-virgin
PKGMGR_DISTRO="${{ matrix.distro }}" make build-missing-virgin
# 🔹 RUN test inside virgin image
- name: Virgin ${{ matrix.distro }} pkgmgr test (root)
@@ -46,8 +46,6 @@ jobs:
. "$HOME/.venvs/pkgmgr/bin/activate"
export NIX_CONFIG="experimental-features = nix-command flakes"
pkgmgr update pkgmgr --clone-mode shallow --no-verification
pkgmgr version pkgmgr


@@ -23,7 +23,7 @@ jobs:
- name: Build virgin container (${{ matrix.distro }})
run: |
set -euo pipefail
distro="${{ matrix.distro }}" make build-missing-virgin
PKGMGR_DISTRO="${{ matrix.distro }}" make build-missing-virgin
# 🔹 RUN test inside virgin image as non-root
- name: Virgin ${{ matrix.distro }} pkgmgr test (user)
@@ -59,7 +59,6 @@ jobs:
pkgmgr version pkgmgr
export NIX_REMOTE=local
export NIX_CONFIG=\"experimental-features = nix-command flakes\"
nix run /src#pkgmgr -- version pkgmgr
"
'


@@ -1,6 +1,59 @@
## [1.6.3] - 2025-12-14
* ***Fixed:*** Corrected repository path resolution so release and version logic consistently use the canonical packaging/* layout, preventing changelog and packaging files from being read or updated from incorrect locations.
## [1.6.2] - 2025-12-14
* **pkgmgr version** now also shows the installed pkgmgr version when run outside a repository.
## [1.6.1] - 2025-12-14
* * Added automatic retry handling for GitHub 403 / rate-limit errors during Nix flake installs (Fibonacci backoff with jitter).
## [1.6.0] - 2025-12-14
* *** Changed ***
- Unified update handling via a single top-level `pkgmgr update` command, removing ambiguous update paths.
- Improved update reliability by routing all update logic through a central UpdateManager.
- Renamed system update flag from `--system-update` to `--system` for clarity and consistency.
- Made mirror handling explicit and safer by separating setup, check, and provision responsibilities.
- Improved credential resolution for remote providers (environment → keyring → interactive).
*** Added ***
- Optional system updates via `pkgmgr update --system` (Arch, Debian/Ubuntu, Fedora/RHEL).
- `pkgmgr install --update` to force re-running installers and refresh existing installations.
- Remote repository provisioning for mirrors on supported providers.
- Extended end-to-end test coverage for update and mirror workflows.
*** Fixed ***
- Resolved “Unknown repos command: update” errors after CLI refactoring.
- Improved Nix update stability and reduced CI failures caused by transient rate limits.
## [1.5.0] - 2025-12-13
* - Commands now show live output while running, making long operations easier to follow
- Error messages include full command output, making failures easier to understand and debug
- Deinstallation is more complete and predictable, removing CLI links and properly cleaning up repositories
- Preview mode is more trustworthy, clearly showing what would happen without making changes
- Repository configuration problems are detected earlier with clear, user-friendly explanations
- More consistent behavior across different Linux distributions
- More reliable execution in Docker containers and CI environments
- Nix-based execution works more smoothly, especially when running as root or inside containers
- Existing commands, scripts, and workflows continue to work without any breaking changes
## [1.4.1] - 2025-12-12
* Fixed stable release container publishing
## [1.4.0] - 2025-12-12
* **Docker Container Building**
**Docker Container Building**
* New official container images are automatically published on each release.
* Images are available per distribution and as a default Arch-based image.
@@ -14,7 +67,7 @@
## [1.3.0] - 2025-12-12
* **Minor release Stability & CI hardening**
**Stability & CI hardening**
* Stabilized Nix resolution and global symlink handling across Arch, CentOS, Debian, and Ubuntu
* Ensured Nix works reliably in CI, sudo, login, and non-login shells without overriding distro-managed paths
@@ -26,7 +79,7 @@
## [1.2.1] - 2025-12-12
* **Changed**
**Changed**
* Split container tests into *virtualenv* and *Nix flake* environments to clearly separate Python and Nix responsibilities.
@@ -43,7 +96,7 @@
## [1.2.0] - 2025-12-12
* **Release workflow overhaul**
**Release workflow overhaul**
* Introduced a fully structured release workflow with clear phases and safeguards
* Added preview-first releases with explicit confirmation before execution
@@ -60,7 +113,8 @@
## [1.0.0] - 2025-12-11
* **1.0.0 Official Stable Release 🎉**
**Official Stable Release 🎉**
*First stable release of PKGMGR, the multi-distro development and package workflow manager.*
---
@@ -153,7 +207,7 @@ PKGMGR 1.0.0 unifies repository management, build tooling, release automation an
## [0.9.1] - 2025-12-10
* * Refactored installer: new `venv-create.sh`, cleaner root/user setup flow, updated README with architecture map.
* Refactored installer: new `venv-create.sh`, cleaner root/user setup flow, updated README with architecture map.
* Split virgin tests into root/user workflows; stabilized Nix installer across distros; improved test scripts with dynamic distro selection and isolated Nix stores.
* Fixed repository directory resolution; improved `pkgmgr path` and `pkgmgr shell`; added full unit/E2E coverage.
* Removed deprecated files and updated `.gitignore`.
@@ -248,47 +302,45 @@ PKGMGR 1.0.0 unifies repository management, build tooling, release automation an
## [0.7.1] - 2025-12-09
* Fix floating 'latest' tag logic: dereference annotated target (vX.Y.Z^{}), add tag message to avoid Git errors, ensure best-effort update without blocking releases, and update unit tests (see ChatGPT conversation: https://chatgpt.com/share/69383024-efa4-800f-a875-129b81fa40ff).
* Fix floating 'latest' tag logic
* dereference annotated target (vX.Y.Z^{})
* add tag message to avoid Git errors
* ensure best-effort update without blocking releases
## [0.7.0] - 2025-12-09
* Add Git helpers for branch sync and floating 'latest' tag in the release workflow, ensure main/master are updated from origin before tagging, and extend unit/e2e tests including 'pkgmgr release --help' coverage (see ChatGPT conversation: https://chatgpt.com/share/69383024-efa4-800f-a875-129b81fa40ff)
* Add Git helpers for branch sync and floating 'latest' tag in the release workflow
* ensure main/master are updated from origin before tagging
## [0.6.0] - 2025-12-09
* Expose DISTROS and BASE_IMAGE_* variables as exported Makefile environment variables so all build and test commands can consume them dynamically. By exporting these values, every Make target (e.g., build, build-no-cache, build-missing, test-container, test-unit, test-e2e) and every delegated script in scripts/build/ and scripts/test/ now receives a consistent view of the supported distributions and their base container images. This change removes duplicated definitions across scripts, ensures reproducible builds, and allows build tooling to react automatically when new distros or base images are added to the Makefile.
* Consistent view of the supported distributions and their base container images.
## [0.5.1] - 2025-12-09
* Refine pkgmgr release CLI close wiring and integration tests for --close flag (ChatGPT: https://chatgpt.com/share/69376b4e-8440-800f-9d06-535ec1d7a40e)
* Refine pkgmgr release CLI close wiring and integration tests for --close flag
## [0.5.0] - 2025-12-09
* Add pkgmgr branch close subcommand, extend CLI parser wiring, and add unit tests for branch handling and version version-selection logic (see ChatGPT conversation: https://chatgpt.com/share/693762a3-9ea8-800f-a640-bc78170953d1)
* Add pkgmgr branch close subcommand, extend CLI parser wiring
## [0.4.3] - 2025-12-09
* Implement current-directory repository selection for release and proxy commands, unify selection semantics across CLI layers, extend release workflow with --close, integrate branch closing logic, fix wiring for get_repo_identifier/get_repo_dir, update packaging files (PKGBUILD, spec, flake.nix, pyproject), and add comprehensive unit/e2e tests for release and branch commands (see ChatGPT conversation: https://chatgpt.com/share/69375cfe-9e00-800f-bd65-1bd5937e1696)
* Implement current-directory repository selection for release and proxy commands, unify selection semantics across CLI layers, extend release workflow with --close, integrate branch closing logic, fix wiring for get_repo_identifier/get_repo_dir, update packaging files (PKGBUILD, spec, flake.nix, pyproject)
## [0.4.2] - 2025-12-09
* Wire pkgmgr release CLI to new helper and add unit tests (see ChatGPT conversation: https://chatgpt.com/share/69374f09-c760-800f-92e4-5b44a4510b62)
* Wire pkgmgr release CLI to new helper
## [0.4.1] - 2025-12-08
* Add branch close subcommand and integrate release close/editor flow (ChatGPT: https://chatgpt.com/share/69374f09-c760-800f-92e4-5b44a4510b62)
* Add branch close subcommand and integrate release close/editor flow
## [0.4.0] - 2025-12-08
* Add branch closing helper and --close flag to release command, including CLI wiring and tests (see https://chatgpt.com/share/69374aec-74ec-800f-bde3-5d91dfdb9b91)
* Add branch closing helper and --close flag to release command
## [0.3.0] - 2025-12-08
@@ -299,13 +351,10 @@ PKGMGR 1.0.0 unifies repository management, build tooling, release automation an
- New config update logic + default YAML sync
- Improved proxy command handling
- Full CLI routing refactor
- Expanded E2E tests for list, proxy, and selection logic
Conversation: https://chatgpt.com/share/693745c3-b8d8-800f-aa29-c8481a2ffae1
## [0.2.0] - 2025-12-08
* Add preview-first release workflow and extended packaging support (see ChatGPT conversation: https://chatgpt.com/share/693722b4-af9c-800f-bccc-8a4036e99630)
* Add preview-first release workflow and extended packaging support
## [0.1.0] - 2025-12-08
@@ -314,5 +363,4 @@ Konversation: https://chatgpt.com/share/693745c3-b8d8-800f-aa29-c8481a2ffae1
## [0.1.0] - 2025-12-08
* Implement unified release helper with preview mode, multi-packaging version bumps, and new integration/unit tests (see ChatGPT conversation 2025-12-08: https://chatgpt.com/share/693722b4-af9c-800f-bccc-8a4036e99630)
* Implement unified release helper with preview mode, multi-packaging version bumps

View File

@@ -36,9 +36,6 @@ CMD ["bash"]
# ============================================================
FROM virgin AS full
# Nix environment defaults (only config; nix itself comes from deps/install flow)
ENV NIX_CONFIG="experimental-features = nix-command flakes"
WORKDIR /build
# Copy full repository for build

View File

@@ -7,8 +7,8 @@
# Distro
# Options: arch debian ubuntu fedora centos
DISTROS ?= arch debian ubuntu fedora centos
distro ?= arch
export distro
PKGMGR_DISTRO ?= arch
export PKGMGR_DISTRO
# ------------------------------------------------------------
# Base images
@@ -30,6 +30,7 @@ export BASE_IMAGE_CENTOS
# Python Unittest Pattern
TEST_PATTERN := test_*.py
export TEST_PATTERN
export PYTHONPATH := src
# ------------------------------------------------------------
# System install
@@ -43,9 +44,9 @@ install:
# ------------------------------------------------------------
# Default: keep current auto-detection behavior
setup: setup-nix setup-venv
setup: setup-venv
# Explicit: developer setup (Python venv + shell RC + main.py install)
# Explicit: developer setup (Python venv + shell RC + install)
setup-venv: setup-nix
@bash scripts/setup/venv.sh
@@ -74,7 +75,7 @@ build-no-cache-all:
@set -e; \
for d in $(DISTROS); do \
echo "=== build-no-cache: $$d ==="; \
distro="$$d" $(MAKE) build-no-cache; \
PKGMGR_DISTRO="$$d" $(MAKE) build-no-cache; \
done
# ------------------------------------------------------------
@@ -100,7 +101,7 @@ test-env-nix: build-missing
test: test-env-virtual test-unit test-integration test-e2e
delete-volumes:
@docker volume rm pkgmgr_nix_store_${distro} pkgmgr_nix_cache_${distro} || true
@docker volume rm "pkgmgr_nix_store_${PKGMGR_DISTRO}" "pkgmgr_nix_cache_${PKGMGR_DISTRO}" || echo "No volumes to delete."
purge: delete-volumes build-no-cache

167
README.md
View File

@@ -25,52 +25,37 @@ together into repeatable development workflows.
Traditional distro package managers like `apt`, `pacman` or `dnf` focus on a
single operating system. PKGMGR instead focuses on **your repositories and
development lifecycle**:
development lifecycle**. It provides one configuration for all repositories,
one unified CLI to interact with them, and a Nix-based foundation that keeps
tooling reproducible across distributions.
* one configuration for all your repos,
* one CLI to interact with them,
* one Nix-based layer to keep tooling reproducible across distros.
Native package managers are still used where they make sense. PKGMGR coordinates
the surrounding development, build and release workflows in a consistent way.
You keep using your native package manager where it makes sense; PKGMGR
coordinates the *development and release flow* around it.
In addition, PKGMGR provides Docker images that can serve as a **reproducible
system baseline**. These images bundle the complete PKGMGR toolchain and are
designed to be reused as a stable execution environment across machines,
pipelines and teams. This approach is specifically used within
[**Infinito.Nexus**](https://s.infinito.nexus/code) to make complex systems
distribution-independent while remaining fully reproducible.
---
## Features 🚀
### Multi-distro development & packaging
PKGMGR enables multi-distro development and packaging by managing multiple
repositories from a single configuration file. It drives complete release
pipelines across Linux distributions using Nix flakes, Python build metadata,
native OS packages such as Arch, Debian and RPM formats, and additional ecosystem
integrations like Ansible.
* Manage **many repositories at once** from a single `config/config.yaml`.
* Drive full **release pipelines** across Linux distributions using:
All functionality is exposed through a unified `pkgmgr` command-line interface
that works identically on every supported distribution. It combines repository
management, Git operations, Docker and Compose orchestration, as well as
versioning, release and changelog workflows. Many commands support a preview
mode, allowing you to inspect the underlying actions before they are executed.
* Nix flakes (`flake.nix`)
* PyPI style builds (`pyproject.toml`)
* OS packages (PKGBUILD, Debian control/changelog, RPM spec)
* Ansible Galaxy metadata and more.
### Rich CLI for daily work
All commands are exposed via the `pkgmgr` CLI and are available on every distro:
* **Repository management**
* `clone`, `update`, `install`, `delete`, `deinstall`, `path`, `list`, `config`
* **Git proxies**
* `pull`, `push`, `status`, `diff`, `add`, `show`, `checkout`,
`reset`, `revert`, `rebase`, `commit`, `branch`
* **Docker & Compose orchestration**
* `build`, `up`, `down`, `exec`, `ps`, `start`, `stop`, `restart`
* **Release toolchain**
* `version`, `release`, `changelog`, `make`
* **Mirror & workflow helpers**
* `mirror` (list/diff/merge/setup), `shell`, `terminal`, `code`, `explore`
Many of these commands support `--preview` mode so you can inspect the
underlying Git or Docker calls without executing them.
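For example, a preview run might look like this (flag placement is an assumption; check `pkgmgr <command> --help` for the exact syntax):
```bash
# Sketch: inspect the underlying Git calls of a push without executing them.
pkgmgr push pkgmgr --preview

# Sketch: inspect the planned release steps before running them for real.
pkgmgr release --preview
```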
---
### Full development workflows
@@ -83,10 +68,6 @@ versioning features it can drive **end-to-end workflows**:
4. Build distro-specific packages.
5. Keep all mirrors and working copies in sync.
The extensive E2E tests (`tests/e2e/`) and GitHub Actions workflows (including
“virgin user” and “virgin root” Arch tests) validate these flows across
different Linux environments.
---
## Architecture & Setup Map 🗺️
@@ -99,25 +80,26 @@ The following diagram gives a full overview of:
![PKGMGR Architecture](assets/map.png)
**Diagram status:** 12 December 2025
**Always-up-to-date version:** [https://s.veen.world/pkgmgrmp](https://s.veen.world/pkgmgrmp)
---
## Installation ⚙️
PKGMGR can be installed using `make`.
The setup mode defines **which runtime layers are prepared**.
---
### Download
```bash
git clone https://github.com/kevinveenbirkenbach/package-manager.git
cd package-manager
```
### Dependency installation (optional)
System dependencies required **before running any *make* commands** are installed via:
@@ -128,8 +110,13 @@ scripts/installation/dependencies.sh
The script detects and normalizes the OS and installs the required **system-level dependencies** accordingly.
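For example, from the repository root (a sketch; the script path is taken from the section above):
```bash
# Install the system-level prerequisites before running any make target.
bash scripts/installation/dependencies.sh
```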
### Install
---
```bash
git clone https://github.com/kevinveenbirkenbach/package-manager.git
cd package-manager
make install
```
### Setup modes
@@ -138,17 +125,8 @@ The script detects and normalizes the OS and installs the required **system-leve
| **make setup** | Python venv **and** Nix | Full development & CI |
| **make setup-venv** | Python venv only | Local user setup |
---
### Install & setup
```bash
git clone https://github.com/kevinveenbirkenbach/package-manager.git
cd package-manager
make install
```
#### Full setup (venv + Nix)
##### Full setup (venv + Nix)
```bash
make setup
@@ -156,7 +134,7 @@ make setup
Use this for CI, servers, containers and full development workflows.
#### Venv-only setup
##### Venv-only setup
```bash
make setup-venv
@@ -167,38 +145,77 @@ Use this if you want PKGMGR isolated without Nix integration.
---
## Run without installation (Nix)
Run PKGMGR directly via Nix Flakes.
---
## Run PKGMGR 🧰
PKGMGR can be executed in different environments.
All modes expose the same CLI and commands.
---
### Run via Nix (no installation)
```bash
nix run github:kevinveenbirkenbach/package-manager#pkgmgr -- --help
```
Example:
---
```bash
nix run github:kevinveenbirkenbach/package-manager#pkgmgr -- version pkgmgr
```
### Run via Docker 🐳
Notes:
PKGMGR can be executed **inside Docker containers** for CI, testing and isolated
workflows.
---
* full flake URL required
* `--` separates Nix and PKGMGR arguments
* can be used alongside any setup mode
#### Container types
Two container types are available.
| Image type | Contains | Typical use |
| ---------- | ----------------------------- | ----------------------- |
| **Virgin** | Base OS + system dependencies | Clean test environments |
| **Stable** | PKGMGR + Nix (flakes enabled) | Ready-to-use workflows |
Example images:
* Virgin: `pkgmgr-arch-virgin`
* Stable: `ghcr.io/kevinveenbirkenbach/pkgmgr:stable`
Use **virgin images** for isolated test runs,
use the **stable image** for fast, reproducible execution.
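As a sketch, an isolated test session in a locally built virgin image could look like this (image name taken from the example above; building the image via the Makefile beforehand is assumed):
```bash
# Throwaway Arch environment with the working tree mounted at /src.
docker run --rm -it \
  -v "$PWD":/src \
  -w /src \
  pkgmgr-arch-virgin \
  bash
```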
---
## Usage 🧰
#### Run examples
After installation, the main entry point is:
```bash
docker run --rm -it \
-v "$PWD":/src \
-w /src \
ghcr.io/kevinveenbirkenbach/pkgmgr:stable \
pkgmgr --help
```
---
### Run via virtual environment (venv)
After activating the venv:
```bash
pkgmgr --help
```
This prints a list of all available subcommands.
The help for each command is available via:
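For instance, with the release command (whose `--help` output is also exercised by the test suite, as noted in the changelog):
```bash
# Per-command help follows the usual pattern.
pkgmgr release --help
```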
---
This allows you to choose between zero install execution using Nix, fully prebuilt
Docker environments or local isolated venv setups with identical command behavior.
---

View File

@@ -26,17 +26,13 @@
packages = forAllSystems (system:
let
pkgs = nixpkgs.legacyPackages.${system};
# Single source of truth for pkgmgr: Python 3.11
# - Matches pyproject.toml: requires-python = ">=3.11"
# - Uses python311Packages so that PyYAML etc. are available
python = pkgs.python311;
pyPkgs = pkgs.python311Packages;
in
rec {
pkgmgr = pyPkgs.buildPythonApplication {
pname = "package-manager";
version = "1.4.0";
version = "1.6.3";
# Use the git repo as source
src = ./.;

14
main.py
View File

@@ -1,14 +0,0 @@
#!/usr/bin/env python3
import sys
from pathlib import Path
# Ensure local src/ overrides installed package
ROOT = Path(__file__).resolve().parent
SRC = ROOT / "src"
if SRC.is_dir():
sys.path.insert(0, str(SRC))
from pkgmgr.cli import main
if __name__ == "__main__":
main()

View File

@@ -1,7 +1,7 @@
# Maintainer: Kevin Veen-Birkenbach <info@veen.world>
pkgname=package-manager
pkgver=0.9.1
pkgver=1.6.3
pkgrel=1
pkgdesc="Local-flake wrapper for Kevin's package-manager (Nix-based)."
arch=('any')
@@ -47,7 +47,7 @@ package() {
cd "$srcdir/$_srcdir_name"
# Install the wrapper into /usr/bin
install -Dm0755 "scripts/pkgmgr-wrapper.sh" \
install -Dm0755 "scripts/launcher.sh" \
"$pkgdir/usr/bin/pkgmgr"
# Install Nix bootstrap (init + lib)

View File

@@ -1,3 +1,9 @@
package-manager (1.6.3-1) unstable; urgency=medium
* ***Fixed:*** Corrected repository path resolution so release and version logic consistently use the canonical packaging/* layout, preventing changelog and packaging files from being read or updated from incorrect locations.
-- Kevin Veen-Birkenbach <kevin@veen.world> Sun, 14 Dec 2025 13:39:52 +0100
package-manager (0.9.1-1) unstable; urgency=medium
* * Refactored installer: new `venv-create.sh`, cleaner root/user setup flow, updated README with architecture map.

View File

@@ -28,7 +28,7 @@ override_dh_auto_install:
install -d debian/package-manager/usr/lib/package-manager
# Install wrapper
install -m0755 scripts/pkgmgr-wrapper.sh \
install -m0755 scripts/launcher.sh \
debian/package-manager/usr/bin/pkgmgr
# Install Nix bootstrap (init + lib)

View File

@@ -1,5 +1,5 @@
Name: package-manager
Version: 0.9.1
Version: 1.6.3
Release: 1%{?dist}
Summary: Wrapper that runs Kevin's package-manager via Nix flake
@@ -42,7 +42,7 @@ install -d %{buildroot}/usr/lib/package-manager
cp -a . %{buildroot}/usr/lib/package-manager/
# Wrapper
install -m0755 scripts/pkgmgr-wrapper.sh %{buildroot}%{_bindir}/pkgmgr
install -m0755 scripts/launcher.sh %{buildroot}%{_bindir}/pkgmgr
# Nix bootstrap (init + lib)
install -d %{buildroot}/usr/lib/package-manager/nix
@@ -74,6 +74,9 @@ echo ">>> package-manager removed. Nix itself was not removed."
/usr/lib/package-manager/
%changelog
* Sun Dec 14 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 1.6.3-1
- ***Fixed:*** Corrected repository path resolution so release and version logic consistently use the canonical packaging/* layout, preventing changelog and packaging files from being read or updated from incorrect locations.
* Wed Dec 10 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.9.1-1
- * Refactored installer: new `venv-create.sh`, cleaner root/user setup flow, updated README with architecture map.
* Split virgin tests into root/user workflows; stabilized Nix installer across distros; improved test scripts with dynamic distro selection and isolated Nix stores.

View File

@@ -7,7 +7,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "package-manager"
version = "1.4.0"
version = "1.6.3"
description = "Kevin's package-manager tool (pkgmgr)"
readme = "README.md"
requires-python = ">=3.9"
@@ -19,16 +19,17 @@ authors = [
# Base runtime dependencies
dependencies = [
"PyYAML>=6.0"
"PyYAML>=6.0",
"tomli; python_version < \"3.11\"",
]
[project.urls]
Homepage = "https://github.com/kevinveenbirkenbach/package-manager"
Homepage = "https://s.veen.world/pkgmgr"
Source = "https://github.com/kevinveenbirkenbach/package-manager"
[project.optional-dependencies]
keyring = ["keyring>=24.0.0"]
dev = [
"pytest",
"mypy"
]

View File

@@ -8,13 +8,13 @@ set -euo pipefail
: "${BASE_IMAGE_CENTOS:=quay.io/centos/centos:stream9}"
resolve_base_image() {
local distro="$1"
case "$distro" in
local PKGMGR_DISTRO="$1"
case "$PKGMGR_DISTRO" in
arch) echo "$BASE_IMAGE_ARCH" ;;
debian) echo "$BASE_IMAGE_DEBIAN" ;;
ubuntu) echo "$BASE_IMAGE_UBUNTU" ;;
fedora) echo "$BASE_IMAGE_FEDORA" ;;
centos) echo "$BASE_IMAGE_CENTOS" ;;
*) echo "ERROR: Unknown distro '$distro'" >&2; exit 1 ;;
*) echo "ERROR: Unknown distro '$PKGMGR_DISTRO'" >&2; exit 1 ;;
esac
}

View File

@@ -2,9 +2,11 @@
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
# shellcheck source=./scripts/build/base.sh
source "${SCRIPT_DIR}/base.sh"
: "${distro:?Environment variable 'distro' must be set (arch|debian|ubuntu|fedora|centos)}"
: "${PKGMGR_DISTRO:?Environment variable 'PKGMGR_DISTRO' must be set (arch|debian|ubuntu|fedora|centos)}"
NO_CACHE=0
MISSING_ONLY=0
@@ -20,13 +22,13 @@ IS_STABLE="false" # "true" -> publish stable tags
DEFAULT_DISTRO="arch"
usage() {
local default_tag="pkgmgr-${distro}"
local default_tag="pkgmgr-${PKGMGR_DISTRO}"
if [[ -n "${TARGET:-}" ]]; then
default_tag="${default_tag}-${TARGET}"
fi
cat <<EOF
Usage: distro=<distro> $0 [options]
Usage: PKGMGR_DISTRO=<distro> $0 [options]
Build options:
--missing Build only if the image does not already exist (local build only)
@@ -101,13 +103,13 @@ done
# Derive default local tag if not provided
if [[ -z "${IMAGE_TAG}" ]]; then
IMAGE_TAG="${REPO_PREFIX}-${distro}"
IMAGE_TAG="${REPO_PREFIX}-${PKGMGR_DISTRO}"
if [[ -n "${TARGET}" ]]; then
IMAGE_TAG="${IMAGE_TAG}-${TARGET}"
fi
fi
BASE_IMAGE="$(resolve_base_image "$distro")"
BASE_IMAGE="$(resolve_base_image "$PKGMGR_DISTRO")"
# Local-only "missing" shortcut
if [[ "${MISSING_ONLY}" == "1" ]]; then
@@ -139,7 +141,7 @@ fi
echo
echo "------------------------------------------------------------"
echo "[build] Building image"
echo "distro = ${distro}"
echo "distro = ${PKGMGR_DISTRO}"
echo "BASE_IMAGE = ${BASE_IMAGE}"
if [[ -n "${TARGET}" ]]; then echo "target = ${TARGET}"; fi
if [[ "${NO_CACHE}" == "1" ]]; then echo "cache = disabled"; fi
@@ -165,14 +167,14 @@ if [[ -n "${TARGET}" ]]; then
fi
compute_publish_tags() {
local distro_tag_base="${REGISTRY}/${OWNER}/${REPO_PREFIX}-${distro}"
local distro_tag_base="${REGISTRY}/${OWNER}/${REPO_PREFIX}-${PKGMGR_DISTRO}"
local alias_tag_base=""
if [[ -n "${TARGET}" ]]; then
distro_tag_base="${distro_tag_base}-${TARGET}"
fi
if [[ "${distro}" == "${DEFAULT_DISTRO}" ]]; then
if [[ "${PKGMGR_DISTRO}" == "${DEFAULT_DISTRO}" ]]; then
alias_tag_base="${REGISTRY}/${OWNER}/${REPO_PREFIX}"
if [[ -n "${TARGET}" ]]; then
alias_tag_base="${alias_tag_base}-${TARGET}"

View File

@@ -30,11 +30,11 @@ echo "[publish] DISTROS=${DISTROS}"
for d in ${DISTROS}; do
echo
echo "============================================================"
echo "[publish] distro=${d}"
echo "[publish] PKGMGR_DISTRO=${d}"
echo "============================================================"
# virgin
distro="${d}" bash "${SCRIPT_DIR}/image.sh" \
PKGMGR_DISTRO="${d}" bash "${SCRIPT_DIR}/image.sh" \
--publish \
--registry "${REGISTRY}" \
--owner "${OWNER}" \
@@ -43,7 +43,7 @@ for d in ${DISTROS}; do
--target virgin
# full (default target)
distro="${d}" bash "${SCRIPT_DIR}/image.sh" \
PKGMGR_DISTRO="${d}" bash "${SCRIPT_DIR}/image.sh" \
--publish \
--registry "${REGISTRY}" \
--owner "${OWNER}" \

View File

@@ -1,8 +1,6 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
echo "[docker] Starting package-manager container"
# ---------------------------------------------------------------------------

View File

@@ -1,11 +1,6 @@
#!/usr/bin/env bash
set -euo pipefail
# Ensure NIX_CONFIG has our defaults if not already set
if [[ -z "${NIX_CONFIG:-}" ]]; then
export NIX_CONFIG="experimental-features = nix-command flakes"
fi
FLAKE_DIR="/usr/lib/package-manager"
# ---------------------------------------------------------------------------
@@ -43,6 +38,6 @@ if command -v nix >/dev/null 2>&1; then
exec nix run "${FLAKE_DIR}#pkgmgr" -- "$@"
fi
echo "[pkgmgr-wrapper] ERROR: 'nix' binary not found on PATH after init."
echo "[pkgmgr-wrapper] Nix is required to run pkgmgr (no Python fallback)."
echo "[launcher] ERROR: 'nix' binary not found on PATH after init."
echo "[launcher] Nix is required to run pkgmgr (no Python fallback)."
exit 1

View File

@@ -22,7 +22,7 @@ It is invoked during package installation (Arch/Debian/Fedora scriptlets) and ca
The entry point sources small, focused modules from *scripts/nix/lib/*:
- *config.sh* — configuration defaults (installer URL, retry timing)
- *bootstrap_config.sh* — configuration defaults (installer URL, retry timing)
- *detect.sh* — container detection helpers
- *path.sh* — PATH adjustments and `nix` binary resolution helpers
- *symlinks.sh* — user/global symlink helpers for stable `nix` discovery

View File

@@ -1,22 +1,29 @@
#!/usr/bin/env bash
set -euo pipefail
# shellcheck source=lib/config.sh
# shellcheck source=lib/detect.sh
# shellcheck source=lib/path.sh
# shellcheck source=lib/symlinks.sh
# shellcheck source=lib/users.sh
# shellcheck source=lib/install.sh
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/lib/config.sh"
# shellcheck source=./scripts/nix/lib/bootstrap_config.sh
source "${SCRIPT_DIR}/lib/bootstrap_config.sh"
# shellcheck source=./scripts/nix/lib/detect.sh
source "${SCRIPT_DIR}/lib/detect.sh"
# shellcheck source=./scripts/nix/lib/path.sh
source "${SCRIPT_DIR}/lib/path.sh"
# shellcheck source=./scripts/nix/lib/symlinks.sh
source "${SCRIPT_DIR}/lib/symlinks.sh"
# shellcheck source=./scripts/nix/lib/users.sh
source "${SCRIPT_DIR}/lib/users.sh"
# shellcheck source=./scripts/nix/lib/install.sh
source "${SCRIPT_DIR}/lib/install.sh"
# shellcheck source=./scripts/nix/lib/nix_conf_file.sh
source "${SCRIPT_DIR}/lib/nix_conf_file.sh"
echo "[init-nix] Starting Nix initialization..."
main() {
@@ -26,6 +33,7 @@ main() {
ensure_nix_on_path
if [[ "${EUID:-0}" -eq 0 ]]; then
nixconf_ensure_experimental_features
ensure_global_nix_symlinks "$(resolve_nix_bin 2>/dev/null || true)"
else
ensure_user_nix_symlink "$(resolve_nix_bin 2>/dev/null || true)"
@@ -106,6 +114,10 @@ main() {
# -------------------------------------------------------------------------
ensure_nix_on_path
if [[ "${EUID:-0}" -eq 0 ]]; then
nixconf_ensure_experimental_features
fi
local nix_bin_post
nix_bin_post="$(resolve_nix_bin 2>/dev/null || true)"

View File

@@ -0,0 +1,89 @@
#!/usr/bin/env bash
set -euo pipefail
# Prevent double-sourcing
if [[ -n "${PKGMGR_NIX_CONF_FILE_SH:-}" ]]; then
return 0
fi
PKGMGR_NIX_CONF_FILE_SH=1
nixconf_file_path() {
echo "/etc/nix/nix.conf"
}
# Ensure a given nix.conf key contains required tokens (merged, no duplicates)
nixconf_ensure_features_key() {
local nix_conf="$1"
local key="$2"
shift 2
local required=("$@")
mkdir -p /etc/nix
# Create file if missing (with just the required tokens)
if [[ ! -f "${nix_conf}" ]]; then
local want="${key} = ${required[*]}"
echo "[nix-conf] Creating ${nix_conf} with: ${want}"
printf "%s\n" "${want}" >"${nix_conf}"
return 0
fi
# Key exists -> merge tokens
if grep -qE "^\s*${key}\s*=" "${nix_conf}"; then
local ok=1
local t
for t in "${required[@]}"; do
if ! grep -qE "^\s*${key}\s*=.*\b${t}\b" "${nix_conf}"; then
ok=0
break
fi
done
if [[ "$ok" -eq 1 ]]; then
echo "[nix-conf] ${key} already correct"
return 0
fi
echo "[nix-conf] Extending ${key} in ${nix_conf}"
local current
current="$(grep -E "^\s*${key}\s*=" "${nix_conf}" | head -n1 | cut -d= -f2-)"
current="$(echo "${current}" | xargs)" # trim
local merged=""
local token
# Start with existing tokens
for token in ${current}; do
if [[ " ${merged} " != *" ${token} "* ]]; then
merged="${merged} ${token}"
fi
done
# Add required tokens
for token in "${required[@]}"; do
if [[ " ${merged} " != *" ${token} "* ]]; then
merged="${merged} ${token}"
fi
done
merged="$(echo "${merged}" | xargs)" # trim
sed -i "s|^\s*${key}\s*=.*|${key} = ${merged}|" "${nix_conf}"
return 0
fi
# Key missing -> append
local want="${key} = ${required[*]}"
echo "[nix-conf] Appending to ${nix_conf}: ${want}"
printf "\n%s\n" "${want}" >>"${nix_conf}"
}
nixconf_ensure_experimental_features() {
local nix_conf
nix_conf="$(nixconf_file_path)"
# Ensure both keys to avoid prompts and cover older/alternate expectations
nixconf_ensure_features_key "${nix_conf}" "experimental-features" "nix-command" "flakes"
nixconf_ensure_features_key "${nix_conf}" "extra-experimental-features" "nix-command" "flakes"
}

52
scripts/nix/lib/retry_403.sh Executable file
View File

@@ -0,0 +1,52 @@
#!/usr/bin/env bash
set -euo pipefail
if [[ -n "${PKGMGR_NIX_RETRY_403_SH:-}" ]]; then
return 0
fi
PKGMGR_NIX_RETRY_403_SH=1
# Retry only when we see the GitHub API rate limit 403 error during nix flake evaluation.
# Retries 7 times with delays: 10, 30, 50, 80, 130, 210, 420 seconds.
run_with_github_403_retry() {
local -a delays=(10 30 50 80 130 210 420)
local attempt=0
local max_retries="${#delays[@]}"
while true; do
local err tmp
tmp="$(mktemp -t nix-err.XXXXXX)"
err=0
# Run the command; capture stderr for inspection while preserving stdout.
if "$@" 2>"$tmp"; then
rm -f "$tmp"
return 0
else
err=$?
fi
# Only retry on the specific GitHub API rate limit 403 case.
if grep -qE 'HTTP error 403' "$tmp" && grep -qiE 'API rate limit exceeded|api\.github\.com' "$tmp"; then
if (( attempt >= max_retries )); then
cat "$tmp" >&2
rm -f "$tmp"
return "$err"
fi
local sleep_s="${delays[$attempt]}"
attempt=$((attempt + 1))
echo "[nix-retry] GitHub API rate-limit (403). Retry ${attempt}/${max_retries} in ${sleep_s}s: $*" >&2
cat "$tmp" >&2
rm -f "$tmp"
sleep "$sleep_s"
continue
fi
# Not our retry case -> fail fast with original stderr.
cat "$tmp" >&2
rm -f "$tmp"
return "$err"
done
}

View File

@@ -1,9 +1,11 @@
#!/usr/bin/env bash
# ------------------------------------------------------------
# Nix shell mode: do not touch venv, only run main.py install
# Nix shell mode: do not touch venv, only run install
# ------------------------------------------------------------
echo "[setup] Nix mode enabled (NIX_ENABLED=1)."
echo "[setup] Skipping virtualenv creation and dependency installation."
echo "[setup] Running main.py install via system python3..."
python3 main.py install
echo "[setup] Running install via system python3..."
python3 -m pkgmgr install
echo "[setup] Setup finished (Nix mode)."

View File

@@ -7,6 +7,7 @@ PROJECT_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
cd "${PROJECT_ROOT}"
VENV_DIR="${HOME}/.venvs/pkgmgr"
# shellcheck disable=SC2016
RC_LINE='if [ -d "${HOME}/.venvs/pkgmgr" ]; then . "${HOME}/.venvs/pkgmgr/bin/activate"; if [ -n "${PS1:-}" ]; then echo "Global Python virtual environment '\''~/.venvs/pkgmgr'\'' activated."; fi; fi'
# ------------------------------------------------------------
@@ -15,9 +16,6 @@ RC_LINE='if [ -d "${HOME}/.venvs/pkgmgr" ]; then . "${HOME}/.venvs/pkgmgr/bin/ac
echo "[setup] Running in normal user mode (developer setup)."
echo "[setup] Ensuring main.py is executable..."
chmod +x main.py || true
echo "[setup] Ensuring global virtualenv root: ${HOME}/.venvs"
mkdir -p "${HOME}/.venvs"
@@ -90,8 +88,8 @@ for rc in "${HOME}/.bashrc" "${HOME}/.zshrc"; do
fi
done
echo "[setup] Running main.py install via venv Python..."
"${VENV_DIR}/bin/python" main.py install
echo "[setup] Running install via venv Python..."
"${VENV_DIR}/bin/python" -m pkgmgr install
echo
echo "[setup] Developer setup complete."

View File

@@ -2,17 +2,17 @@
set -euo pipefail
echo "============================================================"
echo ">>> Running E2E tests: $distro"
echo ">>> Running E2E tests: $PKGMGR_DISTRO"
echo "============================================================"
docker run --rm \
-v "$(pwd):/src" \
-v "pkgmgr_nix_store_${distro}:/nix" \
-v "pkgmgr_nix_cache_${distro}:/root/.cache/nix" \
-v "pkgmgr_nix_store_${PKGMGR_DISTRO}:/nix" \
-v "pkgmgr_nix_cache_${PKGMGR_DISTRO}:/root/.cache/nix" \
-e REINSTALL_PKGMGR=1 \
-e TEST_PATTERN="${TEST_PATTERN}" \
--workdir /src \
"pkgmgr-${distro}" \
"pkgmgr-${PKGMGR_DISTRO}" \
bash -lc '
set -euo pipefail
@@ -49,7 +49,7 @@ docker run --rm \
# Gitdir path shown in the "dubious ownership" error
git config --global --add safe.directory /src/.git || true
# Ephemeral CI containers: allow all paths as a last resort
git config --global --add safe.directory '*' || true
git config --global --add safe.directory "*" || true
fi
# Run the E2E tests inside the Nix development shell

View File

@@ -1,17 +1,17 @@
#!/usr/bin/env bash
set -euo pipefail
IMAGE="pkgmgr-${distro}"
IMAGE="pkgmgr-${PKGMGR_DISTRO}"
echo "============================================================"
echo ">>> Running Nix flake-only test in ${distro} container"
echo ">>> Running Nix flake-only test in ${PKGMGR_DISTRO} container"
echo ">>> Image: ${IMAGE}"
echo "============================================================"
docker run --rm \
-v "$(pwd):/src" \
-v "pkgmgr_nix_store_${distro}:/nix" \
-v "pkgmgr_nix_cache_${distro}:/root/.cache/nix" \
-v "pkgmgr_nix_store_${PKGMGR_DISTRO}:/nix" \
-v "pkgmgr_nix_cache_${PKGMGR_DISTRO}:/root/.cache/nix" \
--workdir /src \
-e REINSTALL_PKGMGR=1 \
"${IMAGE}" \
@@ -27,7 +27,7 @@ docker run --rm \
echo ">>> preflight: nix must exist in image"
if ! command -v nix >/dev/null 2>&1; then
echo "NO_NIX"
echo "ERROR: nix not found in image '\'''"${IMAGE}"''\'' (distro='"${distro}"')"
echo "ERROR: nix not found in image '"${IMAGE}"' (PKGMGR_DISTRO='"${PKGMGR_DISTRO}"')"
echo "HINT: Ensure Nix is installed during image build for this distro."
exit 1
fi
@@ -35,14 +35,28 @@ docker run --rm \
echo ">>> nix version"
nix --version
# ------------------------------------------------------------
# Retry helper for GitHub API rate-limit (HTTP 403)
# ------------------------------------------------------------
if [[ -f /src/scripts/nix/lib/retry_403.sh ]]; then
# shellcheck source=./scripts/nix/lib/retry_403.sh
source /src/scripts/nix/lib/retry_403.sh
elif [[ -f ./scripts/nix/lib/retry_403.sh ]]; then
# shellcheck source=./scripts/nix/lib/retry_403.sh
source ./scripts/nix/lib/retry_403.sh
else
echo "ERROR: retry helper not found: scripts/nix/lib/retry_403.sh"
exit 1
fi
echo ">>> nix flake show"
nix flake show . --no-write-lock-file >/dev/null
run_with_github_403_retry nix flake show . --no-write-lock-file >/dev/null
echo ">>> nix build .#default"
nix build .#default --no-link --no-write-lock-file
run_with_github_403_retry nix build .#default --no-link --no-write-lock-file
echo ">>> nix run .#pkgmgr -- --help"
nix run .#pkgmgr -- --help --no-write-lock-file
run_with_github_403_retry nix run .#pkgmgr -- --help --no-write-lock-file
echo ">>> OK: Nix flake-only test succeeded."
'

View File

@@ -1,7 +1,7 @@
#!/usr/bin/env bash
set -euo pipefail
IMAGE="pkgmgr-$distro"
IMAGE="pkgmgr-$PKGMGR_DISTRO"
echo
echo "------------------------------------------------------------"
@@ -16,9 +16,9 @@ echo
# Run the command and capture the output
if OUTPUT=$(docker run --rm \
-e REINSTALL_PKGMGR=1 \
-v pkgmgr_nix_store_${distro}:/nix \
-v "pkgmgr_nix_store_${PKGMGR_DISTRO}:/nix" \
-v "$(pwd):/src" \
-v "pkgmgr_nix_cache_${distro}:/root/.cache/nix" \
-v "pkgmgr_nix_cache_${PKGMGR_DISTRO}:/root/.cache/nix" \
"$IMAGE" 2>&1); then
echo "$OUTPUT"
echo

View File

@@ -2,17 +2,17 @@
set -euo pipefail
echo "============================================================"
echo ">>> Running INTEGRATION tests in ${distro} container"
echo ">>> Running INTEGRATION tests in ${PKGMGR_DISTRO} container"
echo "============================================================"
docker run --rm \
-v "$(pwd):/src" \
-v pkgmgr_nix_store_${distro}:/nix \
-v "pkgmgr_nix_cache_${distro}:/root/.cache/nix" \
-v "pkgmgr_nix_store_${PKGMGR_DISTRO}:/nix" \
-v "pkgmgr_nix_cache_${PKGMGR_DISTRO}:/root/.cache/nix" \
--workdir /src \
-e REINSTALL_PKGMGR=1 \
-e TEST_PATTERN="${TEST_PATTERN}" \
"pkgmgr-${distro}" \
"pkgmgr-${PKGMGR_DISTRO}" \
bash -lc '
set -e;
git config --global --add safe.directory /src || true;

View File

@@ -2,17 +2,17 @@
set -euo pipefail
echo "============================================================"
echo ">>> Running UNIT tests in ${distro} container"
echo ">>> Running UNIT tests in ${PKGMGR_DISTRO} container"
echo "============================================================"
docker run --rm \
-v "$(pwd):/src" \
-v "pkgmgr_nix_cache_${distro}:/root/.cache/nix" \
-v pkgmgr_nix_store_${distro}:/nix \
-v "pkgmgr_nix_cache_${PKGMGR_DISTRO}:/root/.cache/nix" \
-v "pkgmgr_nix_store_${PKGMGR_DISTRO}:/nix" \
--workdir /src \
-e REINSTALL_PKGMGR=1 \
-e TEST_PATTERN="${TEST_PATTERN}" \
"pkgmgr-${distro}" \
"pkgmgr-${PKGMGR_DISTRO}" \
bash -lc '
set -e;
git config --global --add safe.directory /src || true;

View File

@@ -19,12 +19,20 @@ fi
# ------------------------------------------------------------
# Remove auto-activation lines from shell RC files
# ------------------------------------------------------------
RC_PATTERN='\.venvs\/pkgmgr\/bin\/activate"; if \[ -n "\$${PS1:-}" \]; then echo "Global Python virtual environment '\''~\/\.venvs\/pkgmgr'\'' activated."; fi; fi'
# Matches:
# ~/.venvs/pkgmgr/bin/activate
# ./.venvs/pkgmgr/bin/activate
RC_PATTERN='(\./)?\.venvs/pkgmgr/bin/activate'
echo "[uninstall] Cleaning up ~/.bashrc and ~/.zshrc entries..."
for rc in "$HOME/.bashrc" "$HOME/.zshrc"; do
if [[ -f "$rc" ]]; then
sed -i "/$RC_PATTERN/d" "$rc"
# Remove activation lines (functional)
sed -E -i "/$RC_PATTERN/d" "$rc"
# Remove leftover echo / cosmetic lines referencing pkgmgr venv
sed -i '/\.venvs\/pkgmgr/d' "$rc"
echo "[uninstall] Cleaned $rc"
else
echo "[uninstall] File not found: $rc (skipped)"

5
src/pkgmgr/__main__.py Executable file
View File

@@ -0,0 +1,5 @@
#!/usr/bin/env python3
from pkgmgr.cli import main
if __name__ == "__main__":
main()

View File

@@ -1,5 +1,6 @@
import yaml
import os
from pkgmgr.core.config.save import save_user_config
def interactive_add(config,USER_CONFIG_PATH:str):
"""Interactively prompt the user to add a new repository entry to the user config."""

View File

@@ -45,7 +45,7 @@ def config_init(
# Announce where we will write the result
# ------------------------------------------------------------
print("============================================================")
print(f"[INIT] Writing user configuration to:")
print("[INIT] Writing user configuration to:")
print(f" {user_config_path}")
print("============================================================")
@@ -53,7 +53,7 @@ def config_init(
defaults_config["directories"]["repositories"]
)
print(f"[INIT] Scanning repository base directory:")
print("[INIT] Scanning repository base directory:")
print(f" {repositories_base_dir}")
print("")
@@ -173,7 +173,7 @@ def config_init(
if new_entries:
user_config.setdefault("repositories", []).extend(new_entries)
save_user_config(user_config, user_config_path)
print(f"[SAVE] Wrote user configuration to:")
print("[SAVE] Wrote user configuration to:")
print(f" {user_config_path}")
else:
print("[INFO] No new repositories were added.")

View File

@@ -1,3 +1,4 @@
# src/pkgmgr/actions/install/__init__.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
@@ -27,7 +28,7 @@ from pkgmgr.actions.install.installers.os_packages import (
DebianControlInstaller,
RpmSpecInstaller,
)
from pkgmgr.actions.install.installers.nix_flake import (
from pkgmgr.actions.install.installers.nix import (
NixFlakeInstaller,
)
from pkgmgr.actions.install.installers.python import PythonInstaller
@@ -36,10 +37,8 @@ from pkgmgr.actions.install.installers.makefile import (
)
from pkgmgr.actions.install.pipeline import InstallationPipeline
Repository = Dict[str, Any]
# All available installers, in the order they should be considered.
INSTALLERS = [
ArchPkgbuildInstaller(),
DebianControlInstaller(),
@@ -50,11 +49,6 @@ INSTALLERS = [
]
# ---------------------------------------------------------------------------
# Internal helpers
# ---------------------------------------------------------------------------
def _ensure_repo_dir(
repo: Repository,
repositories_base_dir: str,
@@ -74,7 +68,7 @@ def _ensure_repo_dir(
if not os.path.exists(repo_dir):
print(
f"Repository directory '{repo_dir}' does not exist. "
f"Cloning it now..."
"Cloning it now..."
)
clone_repos(
[repo],
@@ -87,7 +81,7 @@ def _ensure_repo_dir(
if not os.path.exists(repo_dir):
print(
f"Cloning failed for repository {identifier}. "
f"Skipping installation."
"Skipping installation."
)
return None
@@ -137,6 +131,7 @@ def _create_context(
quiet: bool,
clone_mode: str,
update_dependencies: bool,
force_update: bool,
) -> RepoContext:
"""
Build a RepoContext instance for the given repository.
@@ -153,14 +148,10 @@ def _create_context(
quiet=quiet,
clone_mode=clone_mode,
update_dependencies=update_dependencies,
force_update=force_update,
)
# ---------------------------------------------------------------------------
# Public API
# ---------------------------------------------------------------------------
def install_repos(
selected_repos: List[Repository],
repositories_base_dir: str,
@@ -171,10 +162,14 @@ def install_repos(
quiet: bool,
clone_mode: str,
update_dependencies: bool,
force_update: bool = False,
) -> None:
"""
Install one or more repositories according to the configured installers
and the CLI layer precedence rules.
If force_update=True, installers of the currently active layer are allowed
to run again (upgrade/refresh), even if that layer is already loaded.
"""
pipeline = InstallationPipeline(INSTALLERS)
@@ -213,6 +208,7 @@ def install_repos(
quiet=quiet,
clone_mode=clone_mode,
update_dependencies=update_dependencies,
force_update=force_update,
)
pipeline.run(ctx)

View File

@@ -1,3 +1,4 @@
# src/pkgmgr/actions/install/context.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
@@ -28,3 +29,6 @@ class RepoContext:
quiet: bool
clone_mode: str
update_dependencies: bool
# If True, allow re-running installers of the currently active layer.
force_update: bool = False

View File

@@ -9,7 +9,7 @@ pkgmgr.actions.install.installers.
"""
from pkgmgr.actions.install.installers.base import BaseInstaller # noqa: F401
from pkgmgr.actions.install.installers.nix_flake import NixFlakeInstaller # noqa: F401
from pkgmgr.actions.install.installers.nix import NixFlakeInstaller # noqa: F401
from pkgmgr.actions.install.installers.python import PythonInstaller # noqa: F401
from pkgmgr.actions.install.installers.makefile import MakefileInstaller # noqa: F401

View File

@@ -1,3 +1,4 @@
# src/pkgmgr/actions/install/installers/makefile.py
from __future__ import annotations
import os
@@ -9,89 +10,45 @@ from pkgmgr.core.command.run import run_command
class MakefileInstaller(BaseInstaller):
"""
Generic installer that runs `make install` if a Makefile with an
install target is present.
Safety rules:
- If PKGMGR_DISABLE_MAKEFILE_INSTALLER=1 is set, this installer
is globally disabled.
- The higher-level InstallationPipeline ensures that Makefile
installation does not run if a stronger CLI layer already owns
the command (e.g. Nix or OS packages).
"""
layer = "makefile"
MAKEFILE_NAME = "Makefile"
def supports(self, ctx: RepoContext) -> bool:
"""
Return True if this repository has a Makefile and the installer
is not globally disabled.
"""
# Optional global kill switch.
if os.environ.get("PKGMGR_DISABLE_MAKEFILE_INSTALLER") == "1":
if not ctx.quiet:
print(
"[INFO] MakefileInstaller is disabled via "
"PKGMGR_DISABLE_MAKEFILE_INSTALLER."
)
print("[INFO] PKGMGR_DISABLE_MAKEFILE_INSTALLER=1 skipping MakefileInstaller.")
return False
makefile_path = os.path.join(ctx.repo_dir, self.MAKEFILE_NAME)
return os.path.exists(makefile_path)
def _has_install_target(self, makefile_path: str) -> bool:
"""
Heuristically check whether the Makefile defines an install target.
We look for:
- a plain 'install:' target, or
- any 'install-*:' style target.
"""
try:
with open(makefile_path, "r", encoding="utf-8", errors="ignore") as f:
content = f.read()
except OSError:
return False
# Simple heuristics: look for "install:" or targets starting with "install-"
if re.search(r"^install\s*:", content, flags=re.MULTILINE):
return True
if re.search(r"^install-[a-zA-Z0-9_-]*\s*:", content, flags=re.MULTILINE):
return True
return False
def run(self, ctx: RepoContext) -> None:
"""
Execute `make install` in the repository directory if an install
target exists.
"""
makefile_path = os.path.join(ctx.repo_dir, self.MAKEFILE_NAME)
if not os.path.exists(makefile_path):
if not ctx.quiet:
print(
f"[pkgmgr] Makefile '{makefile_path}' not found, "
"skipping MakefileInstaller."
)
return
if not self._has_install_target(makefile_path):
if not ctx.quiet:
print(
f"[pkgmgr] No 'install' target found in {makefile_path}."
)
print(f"[pkgmgr] No 'install' target found in {makefile_path}.")
return
if not ctx.quiet:
print(
f"[pkgmgr] Running 'make install' in {ctx.repo_dir} "
f"(MakefileInstaller)"
)
print(f"[pkgmgr] Running make install for {ctx.identifier} (MakefileInstaller)")
cmd = "make install"
run_command(cmd, cwd=ctx.repo_dir, preview=ctx.preview)
run_command("make install", cwd=ctx.repo_dir, preview=ctx.preview)
if ctx.force_update and not ctx.quiet:
print(f"[makefile] repo '{ctx.identifier}' successfully upgraded.")

View File

@@ -0,0 +1,4 @@
from .installer import NixFlakeInstaller
from .retry import RetryPolicy
__all__ = ["NixFlakeInstaller", "RetryPolicy"]

View File

@@ -0,0 +1,168 @@
# src/pkgmgr/actions/install/installers/nix/installer.py
from __future__ import annotations
import os
import shutil
from typing import List, Tuple, TYPE_CHECKING
from pkgmgr.actions.install.installers.base import BaseInstaller
from .profile import NixProfileInspector
from .retry import GitHubRateLimitRetry, RetryPolicy
from .runner import CommandRunner
if TYPE_CHECKING:
from pkgmgr.actions.install.context import RepoContext
class NixFlakeInstaller(BaseInstaller):
layer = "nix"
FLAKE_FILE = "flake.nix"
def __init__(self, policy: RetryPolicy | None = None) -> None:
self._runner = CommandRunner()
self._retry = GitHubRateLimitRetry(policy=policy)
self._profile = NixProfileInspector()
# ------------------------------------------------------------------ #
# Compatibility: supports()
# ------------------------------------------------------------------ #
def supports(self, ctx: "RepoContext") -> bool:
if os.environ.get("PKGMGR_DISABLE_NIX_FLAKE_INSTALLER") == "1":
if not ctx.quiet:
print("[INFO] PKGMGR_DISABLE_NIX_FLAKE_INSTALLER=1 skipping NixFlakeInstaller.")
return False
if shutil.which("nix") is None:
return False
return os.path.exists(os.path.join(ctx.repo_dir, self.FLAKE_FILE))
# ------------------------------------------------------------------ #
# Compatibility: output selection
# ------------------------------------------------------------------ #
def _profile_outputs(self, ctx: "RepoContext") -> List[Tuple[str, bool]]:
# (output_name, allow_failure)
if ctx.identifier in {"pkgmgr", "package-manager"}:
return [("pkgmgr", False), ("default", True)]
return [("default", False)]
# ------------------------------------------------------------------ #
# Compatibility: run()
# ------------------------------------------------------------------ #
def run(self, ctx: "RepoContext") -> None:
if not self.supports(ctx):
return
outputs = self._profile_outputs(ctx)
if not ctx.quiet:
print(
"[nix] flake detected in "
f"{ctx.identifier}, ensuring outputs: "
+ ", ".join(name for name, _ in outputs)
)
for output, allow_failure in outputs:
if ctx.force_update:
self._force_upgrade_output(ctx, output, allow_failure)
else:
self._install_only(ctx, output, allow_failure)
# ------------------------------------------------------------------ #
# Core logic (unchanged semantics)
# ------------------------------------------------------------------ #
def _installable(self, ctx: "RepoContext", output: str) -> str:
return f"{ctx.repo_dir}#{output}"
def _install_only(self, ctx: "RepoContext", output: str, allow_failure: bool) -> None:
install_cmd = f"nix profile install {self._installable(ctx, output)}"
if not ctx.quiet:
print(f"[nix] install: {install_cmd}")
res = self._retry.run_with_retry(ctx, self._runner, install_cmd)
if res.returncode == 0:
if not ctx.quiet:
print(f"[nix] output '{output}' successfully installed.")
return
if not ctx.quiet:
print(
f"[nix] install failed for '{output}' (exit {res.returncode}), "
"trying index-based upgrade/remove+install..."
)
indices = self._profile.find_installed_indices_for_output(ctx, self._runner, output)
upgraded = False
for idx in indices:
if self._upgrade_index(ctx, idx):
upgraded = True
if not ctx.quiet:
print(f"[nix] output '{output}' successfully upgraded (index {idx}).")
if upgraded:
return
if indices and not ctx.quiet:
print(f"[nix] upgrade failed; removing indices {indices} and reinstalling '{output}'.")
for idx in indices:
self._remove_index(ctx, idx)
final = self._runner.run(ctx, install_cmd, allow_failure=True)
if final.returncode == 0:
if not ctx.quiet:
print(f"[nix] output '{output}' successfully re-installed.")
return
print(f"[ERROR] Failed to install Nix flake output '{output}' (exit {final.returncode})")
if not allow_failure:
raise SystemExit(final.returncode)
print(f"[WARNING] Continuing despite failure of optional output '{output}'.")
# ------------------------------------------------------------------ #
# force_update path (unchanged semantics)
# ------------------------------------------------------------------ #
def _force_upgrade_output(self, ctx: "RepoContext", output: str, allow_failure: bool) -> None:
indices = self._profile.find_installed_indices_for_output(ctx, self._runner, output)
upgraded_any = False
for idx in indices:
if self._upgrade_index(ctx, idx):
upgraded_any = True
if not ctx.quiet:
print(f"[nix] output '{output}' successfully upgraded (index {idx}).")
if upgraded_any:
print(f"[nix] output '{output}' successfully upgraded.")
return
if indices and not ctx.quiet:
print(f"[nix] upgrade failed; removing indices {indices} and reinstalling '{output}'.")
for idx in indices:
self._remove_index(ctx, idx)
self._install_only(ctx, output, allow_failure)
print(f"[nix] output '{output}' successfully upgraded.")
# ------------------------------------------------------------------ #
# Helpers
# ------------------------------------------------------------------ #
def _upgrade_index(self, ctx: "RepoContext", idx: int) -> bool:
res = self._runner.run(ctx, f"nix profile upgrade --refresh {idx}", allow_failure=True)
return res.returncode == 0
def _remove_index(self, ctx: "RepoContext", idx: int) -> None:
self._runner.run(ctx, f"nix profile remove {idx}", allow_failure=True)

View File

@@ -0,0 +1,71 @@
from __future__ import annotations
import json
from typing import Any, List, TYPE_CHECKING
if TYPE_CHECKING:
from pkgmgr.actions.install.context import RepoContext
from .runner import CommandRunner
class NixProfileInspector:
"""
Reads and interprets `nix profile list --json` and provides helpers for
finding indices matching a given output name.
"""
def find_installed_indices_for_output(self, ctx: "RepoContext", runner: "CommandRunner", output: str) -> List[int]:
res = runner.run(ctx, "nix profile list --json", allow_failure=True)
if res.returncode != 0:
return []
try:
data = json.loads(res.stdout or "{}")
except json.JSONDecodeError:
return []
indices: List[int] = []
elements = data.get("elements")
if isinstance(elements, dict):
for idx_str, elem in elements.items():
try:
idx = int(idx_str)
except (TypeError, ValueError):
continue
if self._element_matches_output(elem, output):
indices.append(idx)
return sorted(indices)
if isinstance(elements, list):
for elem in elements:
idx = elem.get("index") if isinstance(elem, dict) else None
if isinstance(idx, int) and self._element_matches_output(elem, output):
indices.append(idx)
return sorted(indices)
return []
@staticmethod
def element_matches_output(elem: Any, output: str) -> bool:
return NixProfileInspector._element_matches_output(elem, output)
@staticmethod
def _element_matches_output(elem: Any, output: str) -> bool:
out = (output or "").strip()
if not out or not isinstance(elem, dict):
return False
candidates: List[str] = []
for k in ("attrPath", "originalUrl", "url", "storePath", "name"):
v = elem.get(k)
if isinstance(v, str) and v:
candidates.append(v)
for c in candidates:
if c == out:
return True
if f"#{out}" in c:
return True
return False

View File

@@ -0,0 +1,87 @@
from __future__ import annotations
import random
import time
from dataclasses import dataclass
from typing import Iterable, TYPE_CHECKING
from .types import RunResult
if TYPE_CHECKING:
from pkgmgr.actions.install.context import RepoContext
from .runner import CommandRunner
@dataclass(frozen=True)
class RetryPolicy:
max_attempts: int = 7
base_delay_seconds: int = 30
jitter_seconds_min: int = 0
jitter_seconds_max: int = 60
class GitHubRateLimitRetry:
"""
Retries nix install commands only when the error looks like a GitHub API rate limit (HTTP 403).
Backoff: Fibonacci(base, base, ...) + random jitter.
"""
def __init__(self, policy: RetryPolicy | None = None) -> None:
self._policy = policy or RetryPolicy()
def run_with_retry(
self,
ctx: "RepoContext",
runner: "CommandRunner",
install_cmd: str,
) -> RunResult:
quiet = bool(getattr(ctx, "quiet", False))
delays = list(self._fibonacci_backoff(self._policy.base_delay_seconds, self._policy.max_attempts))
last: RunResult | None = None
for attempt, base_delay in enumerate(delays, start=1):
if not quiet:
print(f"[nix] attempt {attempt}/{self._policy.max_attempts}: {install_cmd}")
res = runner.run(ctx, install_cmd, allow_failure=True)
last = res
if res.returncode == 0:
return res
combined = f"{res.stdout}\n{res.stderr}"
if not self._is_github_rate_limit_error(combined):
return res
if attempt >= self._policy.max_attempts:
break
jitter = random.randint(self._policy.jitter_seconds_min, self._policy.jitter_seconds_max)
wait_time = base_delay + jitter
if not quiet:
print(
"[nix] GitHub rate limit detected (403). "
f"Retrying in {wait_time}s (base={base_delay}s, jitter={jitter}s)..."
)
time.sleep(wait_time)
return last if last is not None else RunResult(returncode=1, stdout="", stderr="nix install retry failed")
@staticmethod
def _is_github_rate_limit_error(text: str) -> bool:
t = (text or "").lower()
return (
"http error 403" in t
or "rate limit exceeded" in t
or "github api rate limit" in t
or "api rate limit exceeded" in t
)
@staticmethod
def _fibonacci_backoff(base: int, attempts: int) -> Iterable[int]:
a, b = base, base
for _ in range(max(1, attempts)):
yield a
a, b = b, a + b

View File

@@ -0,0 +1,64 @@
from __future__ import annotations
import subprocess
from typing import TYPE_CHECKING
from .types import RunResult
if TYPE_CHECKING:
from pkgmgr.actions.install.context import RepoContext
class CommandRunner:
"""
Executes commands (shell=True) inside a repository directory (if provided).
Supports preview mode and compact failure output logging.
"""
def run(self, ctx: "RepoContext", cmd: str, allow_failure: bool) -> RunResult:
repo_dir = getattr(ctx, "repo_dir", None) or getattr(ctx, "repo_path", None)
preview = bool(getattr(ctx, "preview", False))
quiet = bool(getattr(ctx, "quiet", False))
if preview:
if not quiet:
print(f"[preview] {cmd}")
return RunResult(returncode=0, stdout="", stderr="")
try:
p = subprocess.run(
cmd,
shell=True,
cwd=repo_dir,
check=False,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
)
except Exception as e:
if not allow_failure:
raise
return RunResult(returncode=1, stdout="", stderr=str(e))
res = RunResult(returncode=p.returncode, stdout=p.stdout or "", stderr=p.stderr or "")
if res.returncode != 0 and not quiet:
self._print_compact_failure(res)
if res.returncode != 0 and not allow_failure:
raise SystemExit(res.returncode)
return res
@staticmethod
def _print_compact_failure(res: RunResult) -> None:
out = (res.stdout or "").strip()
err = (res.stderr or "").strip()
if out:
print("[nix] stdout (last lines):")
print("\n".join(out.splitlines()[-20:]))
if err:
print("[nix] stderr (last lines):")
print("\n".join(err.splitlines()[-40:]))

View File

@@ -0,0 +1,10 @@
from __future__ import annotations
from dataclasses import dataclass
@dataclass(frozen=True)
class RunResult:
returncode: int
stdout: str
stderr: str

View File

@@ -1,165 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installer for Nix flakes.
If a repository contains flake.nix and the 'nix' command is available, this
installer will try to install profile outputs from the flake.
Behavior:
- If flake.nix is present and `nix` exists on PATH:
* First remove any existing `package-manager` profile entry (best-effort).
* Then install one or more flake outputs via `nix profile install`.
- For the package-manager repo:
* `pkgmgr` is mandatory (CLI), `default` is optional.
- For all other repos:
* `default` is mandatory.
Special handling:
- If PKGMGR_DISABLE_NIX_FLAKE_INSTALLER=1 is set, the installer is
globally disabled (useful for CI or debugging).
The higher-level InstallationPipeline and CLI-layer model decide when this
installer is allowed to run, based on where the current CLI comes from
(e.g. Nix, OS packages, Python, Makefile).
"""
import os
import shutil
from typing import TYPE_CHECKING, List, Tuple
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
if TYPE_CHECKING:
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install import InstallContext
class NixFlakeInstaller(BaseInstaller):
"""Install Nix flake profiles for repositories that define flake.nix."""
# Logical layer name, used by capability matchers.
layer = "nix"
FLAKE_FILE = "flake.nix"
PROFILE_NAME = "package-manager"
def supports(self, ctx: "RepoContext") -> bool:
"""
Only support repositories that:
- Are NOT explicitly disabled via PKGMGR_DISABLE_NIX_FLAKE_INSTALLER=1,
- Have a flake.nix,
- And have the `nix` command available.
"""
# Optional global kill-switch for CI or debugging.
if os.environ.get("PKGMGR_DISABLE_NIX_FLAKE_INSTALLER") == "1":
print(
"[INFO] PKGMGR_DISABLE_NIX_FLAKE_INSTALLER=1 "
"NixFlakeInstaller is disabled."
)
return False
# Nix must be available.
if shutil.which("nix") is None:
return False
# flake.nix must exist in the repository.
flake_path = os.path.join(ctx.repo_dir, self.FLAKE_FILE)
return os.path.exists(flake_path)
def _ensure_old_profile_removed(self, ctx: "RepoContext") -> None:
"""
Best-effort removal of an existing profile entry.
This handles the "already provides the following file" conflict by
removing previous `package-manager` installations before we install
the new one.
Any error in `nix profile remove` is intentionally ignored, because
a missing profile entry is not a fatal condition.
"""
if shutil.which("nix") is None:
return
cmd = f"nix profile remove {self.PROFILE_NAME} || true"
try:
# NOTE: no allow_failure here → matches the existing unit tests
run_command(cmd, cwd=ctx.repo_dir, preview=ctx.preview)
except SystemExit:
# Unit tests explicitly assert this is swallowed
pass
def _profile_outputs(self, ctx: "RepoContext") -> List[Tuple[str, bool]]:
"""
Decide which flake outputs to install and whether failures are fatal.
Returns a list of (output_name, allow_failure) tuples.
Rules:
- For the package-manager repo (identifier 'pkgmgr' or 'package-manager'):
[("pkgmgr", False), ("default", True)]
- For all other repos:
[("default", False)]
"""
ident = ctx.identifier
if ident in {"pkgmgr", "package-manager"}:
# pkgmgr: main CLI output is "pkgmgr" (mandatory),
# "default" is nice-to-have (non-fatal).
return [("pkgmgr", False), ("default", True)]
# Generic repos: we expect a sensible "default" package/app.
# Failure to install it is considered fatal.
return [("default", False)]
def run(self, ctx: "InstallContext") -> None:
"""
Install Nix flake profile outputs.
For the package-manager repo, failure installing 'pkgmgr' is fatal,
failure installing 'default' is non-fatal.
For other repos, failure installing 'default' is fatal.
"""
# Reuse supports() to keep logic in one place.
if not self.supports(ctx): # type: ignore[arg-type]
return
outputs = self._profile_outputs(ctx) # list of (name, allow_failure)
print(
"Nix flake detected in "
f"{ctx.identifier}, attempting to install profile outputs: "
+ ", ".join(name for name, _ in outputs)
)
# Handle the "already installed" case up-front for the shared profile.
self._ensure_old_profile_removed(ctx) # type: ignore[arg-type]
for output, allow_failure in outputs:
cmd = f"nix profile install {ctx.repo_dir}#{output}"
print(f"[INFO] Running: {cmd}")
ret = os.system(cmd)
# Extract real exit code from os.system() result
if os.WIFEXITED(ret):
exit_code = os.WEXITSTATUS(ret)
else:
# abnormal termination (signal etc.) keep raw value
exit_code = ret
if exit_code == 0:
print(f"Nix flake output '{output}' successfully installed.")
continue
print(f"[Error] Failed to install Nix flake output '{output}'")
print(f"[Error] Command exited with code {exit_code}")
if not allow_failure:
raise SystemExit(exit_code)
print(
"[Warning] Continuing despite failure to install "
f"optional output '{output}'."
)

View File

@@ -1,104 +1,40 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
PythonInstaller — install Python projects defined via pyproject.toml.
Installation rules:
1. pip command resolution:
a) If PKGMGR_PIP is set → use it exactly as provided.
b) Else if running inside a virtualenv → use `sys.executable -m pip`.
c) Else → create/use a per-repository virtualenv under ~/.venvs/<repo>/.
2. Installation target:
- Always install into the resolved pip environment.
- Never modify system Python, never rely on --user.
- Nix-immutable systems (PEP 668) are automatically avoided because we
never touch system Python.
3. The installer is skipped when:
- PKGMGR_DISABLE_PYTHON_INSTALLER=1 is set.
- The repository has no pyproject.toml.
All pip failures are treated as fatal.
"""
# src/pkgmgr/actions/install/installers/python.py
from __future__ import annotations
import os
import sys
import subprocess
from typing import TYPE_CHECKING
from pkgmgr.actions.install.installers.base import BaseInstaller
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.core.command.run import run_command
if TYPE_CHECKING:
from pkgmgr.actions.install.context import RepoContext
from pkgmgr.actions.install import InstallContext
class PythonInstaller(BaseInstaller):
"""Install Python projects and dependencies via pip using isolated environments."""
layer = "python"
# ----------------------------------------------------------------------
# Installer activation logic
# ----------------------------------------------------------------------
def supports(self, ctx: "RepoContext") -> bool:
"""
Return True if this installer should handle this repository.
The installer is active only when:
- A pyproject.toml exists in the repo, and
- PKGMGR_DISABLE_PYTHON_INSTALLER is not set.
"""
def supports(self, ctx: RepoContext) -> bool:
if os.environ.get("PKGMGR_DISABLE_PYTHON_INSTALLER") == "1":
print("[INFO] PythonInstaller disabled via PKGMGR_DISABLE_PYTHON_INSTALLER.")
return False
return os.path.exists(os.path.join(ctx.repo_dir, "pyproject.toml"))
# ----------------------------------------------------------------------
# Virtualenv handling
# ----------------------------------------------------------------------
def _in_virtualenv(self) -> bool:
"""Detect whether the current interpreter is inside a venv."""
if os.environ.get("VIRTUAL_ENV"):
return True
base = getattr(sys, "base_prefix", sys.prefix)
return sys.prefix != base
def _ensure_repo_venv(self, ctx: "InstallContext") -> str:
"""
Ensure that ~/.venvs/<identifier>/ exists and contains a minimal venv.
Returns the venv directory path.
"""
def _ensure_repo_venv(self, ctx: RepoContext) -> str:
venv_dir = os.path.expanduser(f"~/.venvs/{ctx.identifier}")
python = sys.executable
if not os.path.isdir(venv_dir):
print(f"[python-installer] Creating virtualenv: {venv_dir}")
subprocess.check_call([python, "-m", "venv", venv_dir])
if not os.path.exists(venv_dir):
run_command(f"{python} -m venv {venv_dir}", preview=ctx.preview)
return venv_dir
# ----------------------------------------------------------------------
# pip command resolution
# ----------------------------------------------------------------------
def _pip_cmd(self, ctx: "InstallContext") -> str:
"""
Determine which pip command to use.
Priority:
1. PKGMGR_PIP override given by user or automation.
2. Active virtualenv → use sys.executable -m pip.
3. Per-repository venv → ~/.venvs/<repo>/bin/pip
"""
def _pip_cmd(self, ctx: RepoContext) -> str:
explicit = os.environ.get("PKGMGR_PIP", "").strip()
if explicit:
return explicit
@@ -107,33 +43,19 @@ class PythonInstaller(BaseInstaller):
return f"{sys.executable} -m pip"
venv_dir = self._ensure_repo_venv(ctx)
pip_path = os.path.join(venv_dir, "bin", "pip")
return pip_path
return os.path.join(venv_dir, "bin", "pip")
# ----------------------------------------------------------------------
# Execution
# ----------------------------------------------------------------------
def run(self, ctx: "InstallContext") -> None:
"""
Install the project defined by pyproject.toml.
Uses the resolved pip environment. Installation is isolated and never
touches system Python.
"""
if not self.supports(ctx): # type: ignore[arg-type]
return
pyproject = os.path.join(ctx.repo_dir, "pyproject.toml")
if not os.path.exists(pyproject):
def run(self, ctx: RepoContext) -> None:
if not self.supports(ctx):
return
print(f"[python-installer] Installing Python project for {ctx.identifier}...")
pip_cmd = self._pip_cmd(ctx)
run_command(f"{pip_cmd} install .", cwd=ctx.repo_dir, preview=ctx.preview)
# Final install command: ALWAYS isolated, never system-wide.
install_cmd = f"{pip_cmd} install ."
run_command(install_cmd, cwd=ctx.repo_dir, preview=ctx.preview)
if ctx.force_update:
# test-visible marker
print(f"[python-installer] repo '{ctx.identifier}' successfully upgraded.")
print(f"[python-installer] Installation finished for {ctx.identifier}.")

View File

@@ -1,21 +1,9 @@
# src/pkgmgr/actions/install/pipeline.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installation pipeline orchestration for repositories.
This module implements the "Setup Controller" logic:
1. Detect current CLI command for the repo (if any).
2. Classify it into a layer (os-packages, nix, python, makefile).
3. Iterate over installers in layer order:
- Skip installers whose layer is weaker than an already-loaded one.
- Run only installers that support() the repo and add new capabilities.
- After each installer, re-resolve the command and update the layer.
4. Maintain the repo["command"] field and create/update symlinks via create_ink().
The goal is to prevent conflicting installations and make the layering
behaviour explicit and testable.
"""
from __future__ import annotations
@@ -36,34 +24,15 @@ from pkgmgr.core.command.resolve import resolve_command_for_repo
@dataclass
class CommandState:
"""
Represents the current CLI state for a repository:
- command: absolute or relative path to the CLI entry point
- layer: which conceptual layer this command belongs to
"""
command: Optional[str]
layer: Optional[CliLayer]
class CommandResolver:
"""
Small helper responsible for resolving the current command for a repo
and mapping it into a CommandState.
"""
def __init__(self, ctx: RepoContext) -> None:
self._ctx = ctx
def resolve(self) -> CommandState:
"""
Resolve the current command for this repository.
If resolve_command_for_repo raises SystemExit (e.g. Python package
without installed entry point), we treat this as "no command yet"
from the point of view of the installers.
"""
repo = self._ctx.repo
identifier = self._ctx.identifier
repo_dir = self._ctx.repo_dir
@@ -85,28 +54,10 @@ class CommandResolver:
class InstallationPipeline:
"""
High-level orchestrator that applies a sequence of installers
to a repository based on CLI layer precedence.
"""
def __init__(self, installers: Sequence[BaseInstaller]) -> None:
self._installers = list(installers)
# ------------------------------------------------------------------
# Public API
# ------------------------------------------------------------------
def run(self, ctx: RepoContext) -> None:
"""
Execute the installation pipeline for a single repository.
- Detect initial command & layer.
- Optionally create a symlink.
- Run installers in order, skipping those whose layer is weaker
than an already-loaded CLI.
- After each installer, re-resolve the command and refresh the
symlink if needed.
"""
repo = ctx.repo
repo_dir = ctx.repo_dir
identifier = ctx.identifier
@@ -119,7 +70,6 @@ class InstallationPipeline:
resolver = CommandResolver(ctx)
state = resolver.resolve()
# Persist initial command (if any) and create a symlink.
if state.command:
repo["command"] = state.command
create_ink(
@@ -135,11 +85,9 @@ class InstallationPipeline:
provided_capabilities: Set[str] = set()
# Main installer loop
for installer in self._installers:
layer_name = getattr(installer, "layer", None)
# Installers without a layer participate without precedence logic.
if layer_name is None:
self._run_installer(installer, ctx, identifier, repo_dir, quiet)
continue
@@ -147,42 +95,33 @@ class InstallationPipeline:
try:
installer_layer = CliLayer(layer_name)
except ValueError:
# Unknown layer string → treat as lowest priority.
installer_layer = None
# "Previous/Current layer already loaded?"
if state.layer is not None and installer_layer is not None:
current_prio = layer_priority(state.layer)
installer_prio = layer_priority(installer_layer)
if current_prio < installer_prio:
# Current CLI comes from a higher-priority layer,
# so we skip this installer entirely.
if not quiet:
print(
f"[pkgmgr] Skipping installer "
"[pkgmgr] Skipping installer "
f"{installer.__class__.__name__} for {identifier} "
f"CLI already provided by layer {state.layer.value!r}."
)
continue
if current_prio == installer_prio:
# Same layer already provides a CLI; usually there is no
# need to run another installer on top of it.
if current_prio == installer_prio and not ctx.force_update:
if not quiet:
print(
f"[pkgmgr] Skipping installer "
"[pkgmgr] Skipping installer "
f"{installer.__class__.__name__} for {identifier} "
f"layer {installer_layer.value!r} is already loaded."
)
continue
# Check if this installer is applicable at all.
if not installer.supports(ctx):
continue
# Capabilities: if everything this installer would provide is already
# covered, we can safely skip it.
caps = installer.discover_capabilities(ctx)
if caps and caps.issubset(provided_capabilities):
if not quiet:
@@ -193,18 +132,22 @@ class InstallationPipeline:
continue
if not quiet:
print(
f"[pkgmgr] Running installer {installer.__class__.__name__} "
f"for {identifier} in '{repo_dir}' "
f"(new capabilities: {caps or set()})..."
)
if ctx.force_update and state.layer is not None and installer_layer == state.layer:
print(
f"[pkgmgr] Running installer {installer.__class__.__name__} "
f"for {identifier} in '{repo_dir}' (upgrade requested)..."
)
else:
print(
f"[pkgmgr] Running installer {installer.__class__.__name__} "
f"for {identifier} in '{repo_dir}' "
f"(new capabilities: {caps or set()})..."
)
# Run the installer with error reporting.
self._run_installer(installer, ctx, identifier, repo_dir, quiet)
provided_capabilities.update(caps)
# After running an installer, re-resolve the command and layer.
new_state = resolver.resolve()
if new_state.command:
repo["command"] = new_state.command
@@ -221,9 +164,6 @@ class InstallationPipeline:
state = new_state
# ------------------------------------------------------------------
# Internal helpers
# ------------------------------------------------------------------
@staticmethod
def _run_installer(
installer: BaseInstaller,
@@ -232,9 +172,6 @@ class InstallationPipeline:
repo_dir: str,
quiet: bool,
) -> None:
"""
Execute a single installer with unified error handling.
"""
try:
installer.run(ctx)
except SystemExit as exc:
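The skip rule in the main installer loop compares layer priorities. A minimal illustration, assuming the CliLayer and layer_priority helpers introduced later in this diff:
from pkgmgr.core.command.layer import CliLayer, layer_priority

current = CliLayer.NIX        # CLI already provided by the Nix layer
candidate = CliLayer.PYTHON   # PythonInstaller wants to run next

if layer_priority(current) < layer_priority(candidate):
    # Lower value = stronger layer, so the Python installer is skipped.
    print("skip: CLI already provided by a stronger layer")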

View File

@@ -1,5 +1,3 @@
from __future__ import annotations
"""
High-level mirror actions.
@@ -10,6 +8,7 @@ Public API:
- setup_mirrors
"""
from __future__ import annotations
from .types import Repository, MirrorMap
from .list_cmd import list_mirrors
from .diff_cmd import diff_mirrors

View File

@@ -150,7 +150,7 @@ def ensure_origin_remote(
current = current_origin_url(repo_dir)
if current == url or not url:
print(
f"[INFO] 'origin' already points to "
"[INFO] 'origin' already points to "
f"{current or '<unknown>'} (no change needed)."
)
else:

View File

@@ -2,7 +2,7 @@ from __future__ import annotations
import os
from urllib.parse import urlparse
from typing import List, Mapping
from typing import Mapping
from .types import MirrorMap, Repository

View File

@@ -1,14 +1,121 @@
# src/pkgmgr/actions/mirror/setup_cmd.py
from __future__ import annotations
from typing import List, Tuple
from urllib.parse import urlparse
from pkgmgr.core.git import run_git, GitError
from pkgmgr.core.git import GitError, run_git
from pkgmgr.core.remote_provisioning import ProviderHint, RepoSpec, ensure_remote_repo
from pkgmgr.core.remote_provisioning.ensure import EnsureOptions
from .context import build_context
from .git_remote import determine_primary_remote_url, ensure_origin_remote
from .types import Repository
def _probe_mirror(url: str, repo_dir: str) -> Tuple[bool, str]:
"""
Probe a remote mirror URL using `git ls-remote`.
Returns:
(True, "") on success,
(False, error_message) on failure.
"""
try:
run_git(["ls-remote", url], cwd=repo_dir)
return True, ""
except GitError as exc:
return False, str(exc)
def _host_from_git_url(url: str) -> str:
url = (url or "").strip()
if not url:
return ""
if "://" in url:
parsed = urlparse(url)
netloc = (parsed.netloc or "").strip()
if "@" in netloc:
netloc = netloc.split("@", 1)[1]
# keep optional :port
return netloc
# scp-like: git@host:owner/repo.git
if "@" in url and ":" in url:
after_at = url.split("@", 1)[1]
host = after_at.split(":", 1)[0]
return host.strip()
return url.split("/", 1)[0].strip()
def _ensure_remote_repository(
repo: Repository,
repositories_base_dir: str,
all_repos: List[Repository],
preview: bool,
) -> None:
"""
Ensure that the remote repository exists using provider APIs.
This is ONLY called when ensure_remote=True.
"""
ctx = build_context(repo, repositories_base_dir, all_repos)
resolved_mirrors = ctx.resolved_mirrors
primary_url = determine_primary_remote_url(repo, resolved_mirrors)
if not primary_url:
print("[INFO] No remote URL could be derived; skipping remote provisioning.")
return
# IMPORTANT:
# - repo["provider"] is typically a provider *kind* (e.g. "github" / "gitea"),
# NOT a hostname. We derive the actual host from the remote URL.
host = _host_from_git_url(primary_url)
owner = repo.get("account")
name = repo.get("repository")
if not host or not owner or not name:
print("[WARN] Missing host/account/repository; cannot ensure remote repo.")
print(f" host={host!r}, account={owner!r}, repository={name!r}")
return
print("------------------------------------------------------------")
print(f"[REMOTE ENSURE] {ctx.identifier}")
print(f"[REMOTE ENSURE] host: {host}")
print("------------------------------------------------------------")
spec = RepoSpec(
host=str(host),
owner=str(owner),
name=str(name),
private=bool(repo.get("private", True)),
description=str(repo.get("description", "")),
)
provider_kind = str(repo.get("provider", "")).strip().lower() or None
try:
result = ensure_remote_repo(
spec,
provider_hint=ProviderHint(kind=provider_kind),
options=EnsureOptions(
preview=preview,
interactive=True,
allow_prompt=True,
save_prompt_token_to_keyring=True,
),
)
print(f"[REMOTE ENSURE] {result.status.upper()}: {result.message}")
if result.url:
print(f"[REMOTE ENSURE] URL: {result.url}")
except Exception as exc: # noqa: BLE001
# Keep action layer resilient
print(f"[ERROR] Remote provisioning failed: {exc}")
print()
def _setup_local_mirrors_for_repo(
repo: Repository,
repositories_base_dir: str,
@@ -16,7 +123,8 @@ def _setup_local_mirrors_for_repo(
preview: bool,
) -> None:
"""
Ensure local Git state is sane (currently: 'origin' remote).
Local setup:
- Ensure 'origin' remote exists and is sane
"""
ctx = build_context(repo, repositories_base_dir, all_repos)
@@ -29,103 +137,68 @@ def _setup_local_mirrors_for_repo(
print()
def _probe_mirror(url: str, repo_dir: str) -> Tuple[bool, str]:
"""
Probe a remote mirror by running `git ls-remote <url>`.
Returns:
(True, "") on success,
(False, error_message) on failure.
Important:
- Only the exit code is evaluated.
- STDERR may contain hints/warnings and is NOT automatically an error.
"""
try:
# We ignore stdout entirely; all that matters is that the command
# completes without a GitError (i.e. exit code 0).
run_git(["ls-remote", url], cwd=repo_dir)
return True, ""
except GitError as exc:
return False, str(exc)
def _setup_remote_mirrors_for_repo(
repo: Repository,
repositories_base_dir: str,
all_repos: List[Repository],
preview: bool,
ensure_remote: bool,
) -> None:
"""
Remote-side setup / validation.
Currently only **non-destructive checks** are performed:
Default behavior:
- Non-destructive checks using `git ls-remote`.
- For each mirror (from config + MIRRORS file, file wins):
* `git ls-remote <url>` is executed.
* Exit code 0 → [OK]
* On failure → [WARN] + details from the GitError exception
**No** provider APIs are called and no repositories are created.
Optional behavior:
- If ensure_remote=True:
* Attempt to create missing repositories via provider API
* Uses TokenResolver (ENV -> keyring -> prompt)
"""
ctx = build_context(repo, repositories_base_dir, all_repos)
resolved_m = ctx.resolved_mirrors
resolved_mirrors = ctx.resolved_mirrors
print("------------------------------------------------------------")
print(f"[MIRROR SETUP:REMOTE] {ctx.identifier}")
print(f"[MIRROR SETUP:REMOTE] dir: {ctx.repo_dir}")
print("------------------------------------------------------------")
if not resolved_m:
# Optional: fall back to a heuristically derived URL in case we ever
# want to implement "create automatically".
primary_url = determine_primary_remote_url(repo, resolved_m)
if ensure_remote:
_ensure_remote_repository(
repo,
repositories_base_dir=repositories_base_dir,
all_repos=all_repos,
preview=preview,
)
if not resolved_mirrors:
primary_url = determine_primary_remote_url(repo, resolved_mirrors)
if not primary_url:
print(
"[INFO] No mirrors configured (config or MIRRORS file), and no "
"primary URL could be derived from provider/account/repository."
)
print("[INFO] No mirrors configured and no primary URL available.")
print()
return
ok, error_message = _probe_mirror(primary_url, ctx.repo_dir)
if ok:
print(f"[OK] Remote mirror (primary) is reachable: {primary_url}")
print(f"[OK] primary: {primary_url}")
else:
print("[WARN] Primary remote URL is NOT reachable:")
print(f" {primary_url}")
if error_message:
print(" Details:")
for line in error_message.splitlines():
print(f" {line}")
print(f"[WARN] primary: {primary_url}")
for line in error_message.splitlines():
print(f" {line}")
print()
print(
"[INFO] Remote checks are non-destructive and only use `git ls-remote` "
"to probe mirror URLs."
)
print()
return
# Normal case: we have named mirrors from config/MIRRORS
for name, url in sorted(resolved_m.items()):
for name, url in sorted(resolved_mirrors.items()):
ok, error_message = _probe_mirror(url, ctx.repo_dir)
if ok:
print(f"[OK] Remote mirror '{name}' is reachable: {url}")
print(f"[OK] {name}: {url}")
else:
print(f"[WARN] Remote mirror '{name}' is NOT reachable:")
print(f" {url}")
if error_message:
print(" Details:")
for line in error_message.splitlines():
print(f" {line}")
print(f"[WARN] {name}: {url}")
for line in error_message.splitlines():
print(f" {line}")
print()
print(
"[INFO] Remote checks are non-destructive and only use `git ls-remote` "
"to probe mirror URLs."
)
print()
def setup_mirrors(
@@ -135,22 +208,25 @@ def setup_mirrors(
preview: bool = False,
local: bool = True,
remote: bool = True,
ensure_remote: bool = False,
) -> None:
"""
Setup mirrors for the selected repositories.
local:
- Configure local Git remotes (currently: ensure 'origin' is present and
points to a reasonable URL).
- Configure local Git remotes (ensure 'origin' exists).
remote:
- Non-destructive remote checks using `git ls-remote` for each mirror URL.
No repositories are created on the provider.
- Non-destructive remote checks using `git ls-remote`.
ensure_remote:
- If True, attempt to create missing remote repositories via provider APIs.
- This is explicit and NEVER enabled implicitly.
"""
for repo in selected_repos:
if local:
_setup_local_mirrors_for_repo(
repo,
repo=repo,
repositories_base_dir=repositories_base_dir,
all_repos=all_repos,
preview=preview,
@@ -158,8 +234,9 @@ def setup_mirrors(
if remote:
_setup_remote_mirrors_for_repo(
repo,
repo=repo,
repositories_base_dir=repositories_base_dir,
all_repos=all_repos,
preview=preview,
ensure_remote=ensure_remote,
)
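A few illustrative expectations for _host_from_git_url, following the parsing logic above (the URLs themselves are made up):
assert _host_from_git_url("https://user@github.com/acme/repo.git") == "github.com"
assert _host_from_git_url("ssh://git@git.example.org:2222/acme/repo.git") == "git.example.org:2222"
assert _host_from_git_url("git@github.com:acme/repo.git") == "github.com"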

View File

@@ -289,7 +289,7 @@ def update_spec_version(
if preview:
print(
f"[PREVIEW] Would update spec file "
"[PREVIEW] Would update spec file "
f"{os.path.basename(spec_path)} to Version: {new_version}, Release: 1..."
)
return

View File

@@ -1,5 +1,5 @@
# src/pkgmgr/actions/release/workflow.py
from __future__ import annotations
from typing import Optional
import os
import sys
@@ -7,6 +7,7 @@ from typing import Optional
from pkgmgr.actions.branch import close_branch
from pkgmgr.core.git import get_current_branch, GitError
from pkgmgr.core.repository.paths import resolve_repo_paths
from .files import (
update_changelog,
@@ -57,8 +58,12 @@ def _release_impl(
print(f"New version: {new_ver_str} ({release_type})")
repo_root = os.path.dirname(os.path.abspath(pyproject_path))
paths = resolve_repo_paths(repo_root)
# --- Update versioned files ------------------------------------------------
update_pyproject_version(pyproject_path, new_ver_str, preview=preview)
changelog_message = update_changelog(
changelog_path,
new_ver_str,
@@ -66,38 +71,46 @@ def _release_impl(
preview=preview,
)
flake_path = os.path.join(repo_root, "flake.nix")
update_flake_version(flake_path, new_ver_str, preview=preview)
update_flake_version(paths.flake_nix, new_ver_str, preview=preview)
pkgbuild_path = os.path.join(repo_root, "PKGBUILD")
update_pkgbuild_version(pkgbuild_path, new_ver_str, preview=preview)
if paths.arch_pkgbuild:
update_pkgbuild_version(paths.arch_pkgbuild, new_ver_str, preview=preview)
else:
print("[INFO] No PKGBUILD found (packaging/arch/PKGBUILD or PKGBUILD). Skipping.")
spec_path = os.path.join(repo_root, "package-manager.spec")
update_spec_version(spec_path, new_ver_str, preview=preview)
if paths.rpm_spec:
update_spec_version(paths.rpm_spec, new_ver_str, preview=preview)
else:
print("[INFO] No RPM spec file found. Skipping spec version update.")
effective_message: Optional[str] = message
if effective_message is None and isinstance(changelog_message, str):
if changelog_message.strip():
effective_message = changelog_message.strip()
debian_changelog_path = os.path.join(repo_root, "debian", "changelog")
package_name = os.path.basename(repo_root) or "package-manager"
update_debian_changelog(
debian_changelog_path,
package_name=package_name,
new_version=new_ver_str,
message=effective_message,
preview=preview,
)
if paths.debian_changelog:
update_debian_changelog(
paths.debian_changelog,
package_name=package_name,
new_version=new_ver_str,
message=effective_message,
preview=preview,
)
else:
print("[INFO] No debian changelog found. Skipping debian/changelog update.")
update_spec_changelog(
spec_path=spec_path,
package_name=package_name,
new_version=new_ver_str,
message=effective_message,
preview=preview,
)
if paths.rpm_spec:
update_spec_changelog(
spec_path=paths.rpm_spec,
package_name=package_name,
new_version=new_ver_str,
message=effective_message,
preview=preview,
)
# --- Git commit / tag / push ----------------------------------------------
commit_msg = f"Release version {new_ver_str}"
tag_msg = effective_message or commit_msg
@@ -105,12 +118,12 @@ def _release_impl(
files_to_add = [
pyproject_path,
changelog_path,
flake_path,
pkgbuild_path,
spec_path,
debian_changelog_path,
paths.flake_nix,
paths.arch_pkgbuild,
paths.rpm_spec,
paths.debian_changelog,
]
existing_files = [p for p in files_to_add if p and os.path.exists(p)]
existing_files = [p for p in files_to_add if isinstance(p, str) and p and os.path.exists(p)]
if preview:
for path in existing_files:
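For context, a hedged sketch of how the resolved paths might be inspected; the repository path below is hypothetical, and attributes such as arch_pkgbuild or rpm_spec can be None when the corresponding file is absent (hence the isinstance/exists filter above):
paths = resolve_repo_paths("/home/user/Repositories/package-manager")  # hypothetical path
print(paths.flake_nix)         # e.g. .../flake.nix
print(paths.arch_pkgbuild)     # e.g. PKGBUILD or packaging/arch/PKGBUILD, else None
print(paths.rpm_spec)          # None if no spec file exists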

View File

@@ -1,6 +1,5 @@
import os
import subprocess
import sys
import yaml
from pkgmgr.core.command.alias import generate_alias
from pkgmgr.core.config.save import save_user_config

View File

@@ -1,15 +1,32 @@
import os
import sys
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
def deinstall_repos(selected_repos, repositories_base_dir, bin_dir, all_repos, preview=False):
from pkgmgr.core.command.run import run_command
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
def deinstall_repos(
selected_repos,
repositories_base_dir,
bin_dir,
all_repos,
preview: bool = False,
) -> None:
for repo in selected_repos:
repo_identifier = get_repo_identifier(repo, all_repos)
alias_path = os.path.join(bin_dir, repo_identifier)
# Resolve repository directory
repo_dir = get_repo_dir(repositories_base_dir, repo)
# Prefer alias if available; fall back to identifier
alias_name = str(repo.get("alias") or repo_identifier)
alias_path = os.path.join(os.path.expanduser(bin_dir), alias_name)
# Remove alias link/file (interactive)
if os.path.exists(alias_path):
confirm = input(f"Are you sure you want to delete link '{alias_path}' for {repo_identifier}? [y/N]: ").strip().lower()
confirm = input(
f"Are you sure you want to delete link '{alias_path}' for {repo_identifier}? [y/N]: "
).strip().lower()
if confirm == "y":
if preview:
print(f"[Preview] Would remove link '{alias_path}'.")
@@ -19,10 +36,13 @@ def deinstall_repos(selected_repos, repositories_base_dir, bin_dir, all_repos, p
else:
print(f"No link found for {repo_identifier} in {bin_dir}.")
# Run make deinstall if repository exists and has a Makefile
makefile_path = os.path.join(repo_dir, "Makefile")
if os.path.exists(makefile_path):
print(f"Makefile found in {repo_identifier}, running 'make deinstall'...")
try:
run_command("make deinstall", cwd=repo_dir, preview=preview)
except SystemExit as e:
print(f"[Warning] Failed to run 'make deinstall' for {repo_identifier}: {e}")
print(
f"[Warning] Failed to run 'make deinstall' for {repo_identifier}: {e}"
)

View File

@@ -272,7 +272,7 @@ def list_repositories(
f"{'STATUS'.ljust(status_width)} "
f"{'CATEGORIES'.ljust(cat_width)} "
f"{'TAGS'.ljust(tag_width)} "
f"DIR"
"DIR"
f"{RESET}"
)
print(header)

View File

@@ -1,9 +1,12 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import subprocess
import sys
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.verify import verify_repository
@@ -17,13 +20,6 @@ def pull_with_verification(
) -> None:
"""
Execute `git pull` for each repository with verification.
- Uses verify_repository() in "pull" mode.
- If verification fails (and verification info is set) and
--no-verification is not enabled, the user is prompted to confirm
the pull.
- In preview mode, no interactive prompts are performed and no
Git commands are executed; only the would-be command is printed.
"""
for repo in selected_repos:
repo_identifier = get_repo_identifier(repo, all_repos)
@@ -34,18 +30,13 @@ def pull_with_verification(
continue
verified_info = repo.get("verified")
verified_ok, errors, commit_hash, signing_key = verify_repository(
verified_ok, errors, _commit_hash, _signing_key = verify_repository(
repo,
repo_dir,
mode="pull",
no_verification=no_verification,
)
# Only prompt the user if:
# - we are NOT in preview mode
# - verification is enabled
# - the repo has verification info configured
# - verification failed
if (
not preview
and not no_verification
@@ -59,16 +50,14 @@ def pull_with_verification(
if choice != "y":
continue
# Build the git pull command (include extra args if present)
args_part = " ".join(extra_args) if extra_args else ""
full_cmd = f"git pull{(' ' + args_part) if args_part else ''}"
if preview:
# Preview mode: only show the command, do not execute or prompt.
print(f"[Preview] In '{repo_dir}': {full_cmd}")
else:
print(f"Running in '{repo_dir}': {full_cmd}")
result = subprocess.run(full_cmd, cwd=repo_dir, shell=True)
result = subprocess.run(full_cmd, cwd=repo_dir, shell=True, check=False)
if result.returncode != 0:
print(
f"'git pull' for {repo_identifier} failed "

View File

@@ -1,4 +1,3 @@
import sys
import shutil
from pkgmgr.actions.proxy import exec_proxy_command

View File

@@ -1,68 +0,0 @@
import sys
import shutil
from pkgmgr.actions.repository.pull import pull_with_verification
from pkgmgr.actions.install import install_repos
def update_repos(
selected_repos,
repositories_base_dir,
bin_dir,
all_repos,
no_verification,
system_update,
preview: bool,
quiet: bool,
update_dependencies: bool,
clone_mode: str,
):
"""
Update repositories by pulling latest changes and installing them.
Parameters:
- selected_repos: List of selected repositories.
- repositories_base_dir: Base directory for repositories.
- bin_dir: Directory for symbolic links.
- all_repos: All repository configurations.
- no_verification: Whether to skip verification.
- system_update: Whether to run system update.
- preview: If True, only show commands without executing.
- quiet: If True, suppress messages.
- update_dependencies: Whether to update dependent repositories.
- clone_mode: Method to clone repositories (ssh or https).
"""
pull_with_verification(
selected_repos,
repositories_base_dir,
all_repos,
[],
no_verification,
preview,
)
install_repos(
selected_repos,
repositories_base_dir,
bin_dir,
all_repos,
no_verification,
preview,
quiet,
clone_mode,
update_dependencies,
)
if system_update:
from pkgmgr.core.command.run import run_command
# Nix: upgrade all profile entries (if Nix is available)
if shutil.which("nix") is not None:
try:
run_command("nix profile upgrade '.*'", preview=preview)
except SystemExit as e:
print(f"[Warning] 'nix profile upgrade' failed: {e}")
# Arch / AUR system update
run_command("sudo -u aur_builder yay -Syu --noconfirm", preview=preview)
run_command("sudo pacman -Syyu --noconfirm", preview=preview)

View File

@@ -0,0 +1,10 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
from pkgmgr.actions.update.manager import UpdateManager
__all__ = [
"UpdateManager",
]

View File

@@ -0,0 +1,61 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
from typing import Any, Iterable
from pkgmgr.actions.update.system_updater import SystemUpdater
class UpdateManager:
"""
Orchestrates:
- repository pull + installation
- optional system update
"""
def __init__(self) -> None:
self._system_updater = SystemUpdater()
def run(
self,
selected_repos: Iterable[Any],
repositories_base_dir: str,
bin_dir: str,
all_repos: Any,
no_verification: bool,
system_update: bool,
preview: bool,
quiet: bool,
update_dependencies: bool,
clone_mode: str,
force_update: bool = True,
) -> None:
from pkgmgr.actions.install import install_repos
from pkgmgr.actions.repository.pull import pull_with_verification
pull_with_verification(
selected_repos,
repositories_base_dir,
all_repos,
[],
no_verification,
preview,
)
install_repos(
selected_repos,
repositories_base_dir,
bin_dir,
all_repos,
no_verification,
preview,
quiet,
clone_mode,
update_dependencies,
force_update=force_update,
)
if system_update:
self._system_updater.run(preview=preview)

View File

@@ -0,0 +1,66 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import os
from dataclasses import dataclass
from typing import Dict
def read_os_release(path: str = "/etc/os-release") -> Dict[str, str]:
"""
Parse /etc/os-release into a dict. Returns empty dict if missing.
"""
if not os.path.exists(path):
return {}
result: Dict[str, str] = {}
with open(path, "r", encoding="utf-8") as f:
for line in f:
line = line.strip()
if not line or line.startswith("#") or "=" not in line:
continue
key, value = line.split("=", 1)
result[key.strip()] = value.strip().strip('"')
return result
@dataclass(frozen=True)
class OSReleaseInfo:
"""
Minimal /etc/os-release representation for distro detection.
"""
id: str = ""
id_like: str = ""
pretty_name: str = ""
@staticmethod
def load() -> "OSReleaseInfo":
data = read_os_release()
return OSReleaseInfo(
id=(data.get("ID") or "").lower(),
id_like=(data.get("ID_LIKE") or "").lower(),
pretty_name=(data.get("PRETTY_NAME") or ""),
)
def ids(self) -> set[str]:
ids: set[str] = set()
if self.id:
ids.add(self.id)
if self.id_like:
for part in self.id_like.split():
ids.add(part.strip())
return ids
def is_arch_family(self) -> bool:
ids = self.ids()
return ("arch" in ids) or ("archlinux" in ids)
def is_debian_family(self) -> bool:
ids = self.ids()
return bool(ids.intersection({"debian", "ubuntu"}))
def is_fedora_family(self) -> bool:
ids = self.ids()
return bool(ids.intersection({"fedora", "rhel", "centos", "rocky", "almalinux"}))

View File

@@ -0,0 +1,96 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import platform
import shutil
from pkgmgr.actions.update.os_release import OSReleaseInfo
class SystemUpdater:
"""
Executes distro-specific system update commands, plus Nix profile upgrades if available.
"""
def run(self, *, preview: bool) -> None:
from pkgmgr.core.command.run import run_command
# Distro-agnostic: Nix profile upgrades (if Nix is present).
if shutil.which("nix") is not None:
try:
run_command("nix profile upgrade '.*'", preview=preview)
except SystemExit as e:
print(f"[Warning] 'nix profile upgrade' failed: {e}")
osr = OSReleaseInfo.load()
if osr.is_arch_family():
self._update_arch(preview=preview)
return
if osr.is_debian_family():
self._update_debian(preview=preview)
return
if osr.is_fedora_family():
self._update_fedora(preview=preview)
return
distro = osr.pretty_name or platform.platform()
print(f"[Warning] Unsupported distribution for system update: {distro}")
def _update_arch(self, *, preview: bool) -> None:
from pkgmgr.core.command.run import run_command
yay = shutil.which("yay")
pacman = shutil.which("pacman")
sudo = shutil.which("sudo")
# Prefer yay if available (repo + AUR in one pass).
# Skip the pacman run afterwards to avoid a double update pass.
if yay and sudo:
run_command("sudo -u aur_builder yay -Syu --noconfirm", preview=preview)
return
if pacman and sudo:
run_command("sudo pacman -Syu --noconfirm", preview=preview)
return
print("[Warning] Cannot update Arch system: missing required tools (sudo/yay/pacman).")
def _update_debian(self, *, preview: bool) -> None:
from pkgmgr.core.command.run import run_command
sudo = shutil.which("sudo")
apt_get = shutil.which("apt-get")
if not (sudo and apt_get):
print("[Warning] Cannot update Debian/Ubuntu system: missing required tools (sudo/apt-get).")
return
env = "DEBIAN_FRONTEND=noninteractive"
run_command(f"sudo {env} apt-get update -y", preview=preview)
run_command(f"sudo {env} apt-get -y dist-upgrade", preview=preview)
def _update_fedora(self, *, preview: bool) -> None:
from pkgmgr.core.command.run import run_command
sudo = shutil.which("sudo")
dnf = shutil.which("dnf")
microdnf = shutil.which("microdnf")
if not sudo:
print("[Warning] Cannot update Fedora/RHEL-like system: missing sudo.")
return
if dnf:
run_command("sudo dnf -y upgrade", preview=preview)
return
if microdnf:
run_command("sudo microdnf -y upgrade", preview=preview)
return
print("[Warning] Cannot update Fedora/RHEL-like system: missing dnf/microdnf.")

View File

@@ -1,5 +1,4 @@
from __future__ import annotations
from typing import Optional
import os
import sys
@@ -9,7 +8,7 @@ from pkgmgr.cli.context import CLIContext
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.git import get_tags
from pkgmgr.core.version.semver import SemVer, extract_semver_from_tags
from pkgmgr.core.version.semver import extract_semver_from_tags
from pkgmgr.actions.changelog import generate_changelog

View File

@@ -1,32 +1,30 @@
# src/pkgmgr/cli/commands/mirror.py
from __future__ import annotations
import sys
from typing import Any, Dict, List
from pkgmgr.actions.mirror import (
diff_mirrors,
list_mirrors,
merge_mirrors,
setup_mirrors,
)
from pkgmgr.actions.mirror import diff_mirrors, list_mirrors, merge_mirrors, setup_mirrors
from pkgmgr.cli.context import CLIContext
Repository = Dict[str, Any]
def handle_mirror_command(
args,
ctx: CLIContext,
args: Any,
selected: List[Repository],
) -> None:
"""
Entry point for 'pkgmgr mirror' subcommands.
Subcommands:
- mirror list → list configured mirrors
- mirror diff → compare config vs MIRRORS file
- mirror merge → merge mirrors between config and MIRRORS file
- mirror setup → configure local Git + remote placeholders
- mirror list
- mirror diff
- mirror merge
- mirror setup
- mirror check
- mirror provision
"""
if not selected:
print("[INFO] No repositories selected for 'mirror' command.")
@@ -34,9 +32,6 @@ def handle_mirror_command(
subcommand = getattr(args, "subcommand", None)
# ------------------------------------------------------------
# mirror list
# ------------------------------------------------------------
if subcommand == "list":
source = getattr(args, "source", "all")
list_mirrors(
@@ -47,9 +42,6 @@ def handle_mirror_command(
)
return
# ------------------------------------------------------------
# mirror diff
# ------------------------------------------------------------
if subcommand == "diff":
diff_mirrors(
selected_repos=selected,
@@ -58,27 +50,17 @@ def handle_mirror_command(
)
return
# ------------------------------------------------------------
# mirror merge
# ------------------------------------------------------------
if subcommand == "merge":
source = getattr(args, "source", None)
target = getattr(args, "target", None)
preview = getattr(args, "preview", False)
if source == target:
print(
"[ERROR] For 'mirror merge', source and target "
"must differ (one of: config, file)."
)
print("[ERROR] For 'mirror merge', source and target must differ (config vs file).")
sys.exit(2)
# Config file path can be passed explicitly via --config-path.
# If not given, fall back to the global context (if available).
explicit_config_path = getattr(args, "config_path", None)
user_config_path = explicit_config_path or getattr(
ctx, "user_config_path", None
)
user_config_path = explicit_config_path or getattr(ctx, "user_config_path", None)
merge_mirrors(
selected_repos=selected,
@@ -91,26 +73,42 @@ def handle_mirror_command(
)
return
# ------------------------------------------------------------
# mirror setup
# ------------------------------------------------------------
if subcommand == "setup":
local = getattr(args, "local", False)
remote = getattr(args, "remote", False)
preview = getattr(args, "preview", False)
# If neither flag is set → default to both.
if not local and not remote:
local = True
remote = True
setup_mirrors(
selected_repos=selected,
repositories_base_dir=ctx.repositories_base_dir,
all_repos=ctx.all_repositories,
preview=preview,
local=local,
remote=remote,
local=True,
remote=False,
ensure_remote=False,
)
return
if subcommand == "check":
preview = getattr(args, "preview", False)
setup_mirrors(
selected_repos=selected,
repositories_base_dir=ctx.repositories_base_dir,
all_repos=ctx.all_repositories,
preview=preview,
local=False,
remote=True,
ensure_remote=False,
)
return
if subcommand == "provision":
preview = getattr(args, "preview", False)
setup_mirrors(
selected_repos=selected,
repositories_base_dir=ctx.repositories_base_dir,
all_repos=ctx.all_repositories,
preview=preview,
local=False,
remote=True,
ensure_remote=True,
)
return

View File

@@ -10,12 +10,10 @@ from pkgmgr.cli.context import CLIContext
from pkgmgr.actions.install import install_repos
from pkgmgr.actions.repository.deinstall import deinstall_repos
from pkgmgr.actions.repository.delete import delete_repos
from pkgmgr.actions.repository.update import update_repos
from pkgmgr.actions.repository.status import status_repos
from pkgmgr.actions.repository.list import list_repositories
from pkgmgr.core.command.run import run_command
from pkgmgr.actions.repository.create import create_repo
from pkgmgr.core.repository.selected import get_selected_repos
from pkgmgr.core.command.run import run_command
from pkgmgr.core.repository.dir import get_repo_dir
Repository = Dict[str, Any]
@@ -52,7 +50,7 @@ def handle_repos_command(
selected: List[Repository],
) -> None:
"""
Handle core repository commands (install/update/deinstall/delete/.../list).
Handle core repository commands (install/update/deinstall/delete/status/list/path/shell/create).
"""
# ------------------------------------------------------------
@@ -69,24 +67,7 @@ def handle_repos_command(
args.quiet,
args.clone_mode,
args.dependencies,
)
return
# ------------------------------------------------------------
# update
# ------------------------------------------------------------
if args.command == "update":
update_repos(
selected,
ctx.repositories_base_dir,
ctx.binaries_dir,
ctx.all_repositories,
args.no_verification,
args.system,
args.preview,
args.quiet,
args.dependencies,
args.clone_mode,
force_update=getattr(args, "update", False),
)
return
@@ -147,9 +128,7 @@ def handle_repos_command(
f"{repository.get('account', '?')}/"
f"{repository.get('repository', '?')}"
)
print(
f"[WARN] Could not resolve directory for {ident}: {exc}"
)
print(f"[WARN] Could not resolve directory for {ident}: {exc}")
continue
print(repo_dir)

View File

@@ -1,5 +1,4 @@
from __future__ import annotations
from typing import Optional
import os
import sys
@@ -10,8 +9,13 @@ from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.git import get_tags
from pkgmgr.core.version.semver import SemVer, find_latest_version
from pkgmgr.core.version.installed import (
get_installed_python_version,
get_installed_nix_profile_version,
)
from pkgmgr.core.version.source import (
read_pyproject_version,
read_pyproject_project_name,
read_flake_version,
read_pkgbuild_version,
read_debian_changelog_version,
@@ -19,10 +23,54 @@ from pkgmgr.core.version.source import (
read_ansible_galaxy_version,
)
Repository = Dict[str, Any]
def _print_pkgmgr_self_version() -> None:
"""
Print version information for pkgmgr itself (installed env + nix profile),
used when no repository is selected (e.g. user is not inside a repo).
"""
print("pkgmgr version info")
print("====================")
print("\nRepository: <pkgmgr self>")
print("----------------------------------------")
# Common distribution/module naming variants.
python_candidates = [
"package-manager", # PyPI dist name in your project
"package_manager", # module-ish variant
"pkgmgr", # console/alias-ish
]
nix_candidates = [
"pkgmgr",
"package-manager",
]
installed_python = get_installed_python_version(*python_candidates)
installed_nix = get_installed_nix_profile_version(*nix_candidates)
if installed_python:
print(
f"Installed (Python env): {installed_python.version} "
f"(dist: {installed_python.name})"
)
else:
print("Installed (Python env): <not installed>")
if installed_nix:
print(
f"Installed (Nix profile): {installed_nix.version} "
f"(match: {installed_nix.name})"
)
else:
print("Installed (Nix profile): <not installed>")
# Helpful context for debugging "why do versions differ?"
print(f"Python executable: {sys.executable}")
print(f"Python prefix: {sys.prefix}")
def handle_version(
args,
ctx: CLIContext,
@@ -31,20 +79,39 @@ def handle_version(
"""
Handle the 'version' command.
Shows version information from various sources (git tags, pyproject,
flake.nix, PKGBUILD, debian, spec, Ansible Galaxy).
"""
Shows version information from:
- Git tags
- packaging metadata
- installed Python environment
- installed Nix profile
repo_list = selected
if not repo_list:
print("No repositories selected for version.")
sys.exit(1)
Special case:
- If no repositories are selected (e.g. not in a repo and no identifiers),
print pkgmgr's own installed versions instead of exiting with an error.
"""
if not selected:
_print_pkgmgr_self_version()
return
print("pkgmgr version info")
print("====================")
for repo in repo_list:
# Resolve repository directory
for repo in selected:
identifier = get_repo_identifier(repo, ctx.all_repositories)
python_candidates: list[str] = []
nix_candidates: list[str] = [identifier]
for key in ("pypi", "pip", "python_package", "distribution", "package"):
val = repo.get(key)
if isinstance(val, str) and val.strip():
python_candidates.append(val.strip())
python_candidates.append(identifier)
installed_python = get_installed_python_version(*python_candidates)
installed_nix = get_installed_nix_profile_version(*nix_candidates)
repo_dir = repo.get("directory")
if not repo_dir:
try:
@@ -52,51 +119,79 @@ def handle_version(
except Exception:
repo_dir = None
# If no local clone exists, skip gracefully with info message
if not repo_dir or not os.path.isdir(repo_dir):
identifier = get_repo_identifier(repo, ctx.all_repositories)
print(f"\nRepository: {identifier}")
print("----------------------------------------")
print(
"[INFO] Skipped: repository directory does not exist "
"locally, version detection is not possible."
"[INFO] Skipped: repository directory does not exist locally, "
"version detection is not possible."
)
if installed_python:
print(
f"Installed (Python env): {installed_python.version} "
f"(dist: {installed_python.name})"
)
else:
print("Installed (Python env): <not installed>")
if installed_nix:
print(
f"Installed (Nix profile): {installed_nix.version} "
f"(match: {installed_nix.name})"
)
else:
print("Installed (Nix profile): <not installed>")
continue
print(f"\nRepository: {repo_dir}")
print("----------------------------------------")
# 1) Git tags (SemVer)
try:
tags = get_tags(cwd=repo_dir)
except Exception as exc:
print(f"[ERROR] Could not read git tags: {exc}")
tags = []
latest_tag_info: Optional[Tuple[str, SemVer]]
latest_tag_info = find_latest_version(tags) if tags else None
latest_tag_info: Optional[Tuple[str, SemVer]] = (
find_latest_version(tags) if tags else None
)
if latest_tag_info is None:
latest_tag_str = None
latest_ver = None
if latest_tag_info:
tag, ver = latest_tag_info
print(f"Git (latest SemVer tag): {tag} (parsed: {ver})")
else:
latest_tag_str, latest_ver = latest_tag_info
print("Git (latest SemVer tag): <none found>")
# 2) Packaging / metadata sources
pyproject_version = read_pyproject_version(repo_dir)
pyproject_name = read_pyproject_project_name(repo_dir)
flake_version = read_flake_version(repo_dir)
pkgbuild_version = read_pkgbuild_version(repo_dir)
debian_version = read_debian_changelog_version(repo_dir)
spec_version = read_spec_version(repo_dir)
ansible_version = read_ansible_galaxy_version(repo_dir)
# 3) Print version summary
if latest_ver is not None:
if pyproject_name:
installed_python = get_installed_python_version(
pyproject_name, *python_candidates
)
if installed_python:
print(
f"Git (latest SemVer tag): {latest_tag_str} (parsed: {latest_ver})"
f"Installed (Python env): {installed_python.version} "
f"(dist: {installed_python.name})"
)
else:
print("Git (latest SemVer tag): <none found>")
print("Installed (Python env): <not installed>")
if installed_nix:
print(
f"Installed (Nix profile): {installed_nix.version} "
f"(match: {installed_nix.name})"
)
else:
print("Installed (Nix profile): <not installed>")
print(f"pyproject.toml: {pyproject_version or '<not found>'}")
print(f"flake.nix: {flake_version or '<not found>'}")
@@ -105,15 +200,16 @@ def handle_version(
print(f"package-manager.spec: {spec_version or '<not found>'}")
print(f"Ansible Galaxy meta: {ansible_version or '<not found>'}")
# 4) Consistency hint (Git tag vs. pyproject)
if latest_ver is not None and pyproject_version is not None:
if latest_tag_info and pyproject_version:
try:
file_ver = SemVer.parse(pyproject_version)
if file_ver != latest_ver:
if file_ver != latest_tag_info[1]:
print(
f"[WARN] Version mismatch: Git={latest_ver}, pyproject={file_ver}"
f"[WARN] Version mismatch: "
f"Git={latest_tag_info[1]}, pyproject={file_ver}"
)
except ValueError:
print(
f"[WARN] pyproject version {pyproject_version!r} is not valid SemVer."
f"[WARN] pyproject version {pyproject_version!r} "
f"is not valid SemVer."
)

View File

@@ -129,7 +129,6 @@ def dispatch_command(args, ctx: CLIContext) -> None:
# ------------------------------------------------------------------ #
if args.command in (
"install",
"update",
"deinstall",
"delete",
"status",
@@ -141,6 +140,27 @@ def dispatch_command(args, ctx: CLIContext) -> None:
handle_repos_command(args, ctx, selected)
return
# ------------------------------------------------------------
# update
# ------------------------------------------------------------
if args.command == "update":
from pkgmgr.actions.update import UpdateManager
UpdateManager().run(
selected_repos=selected,
repositories_base_dir=ctx.repositories_base_dir,
bin_dir=ctx.binaries_dir,
all_repos=ctx.all_repositories,
no_verification=args.no_verification,
system_update=args.system,
preview=args.preview,
quiet=args.quiet,
update_dependencies=args.dependencies,
clone_mode=args.clone_mode,
force_update=True,
)
return
# ------------------------------------------------------------------ #
# Tools (explore / terminal / code)
# ------------------------------------------------------------------ #
@@ -176,7 +196,7 @@ def dispatch_command(args, ctx: CLIContext) -> None:
return
if args.command == "mirror":
handle_mirror_command(args, ctx, selected)
handle_mirror_command(ctx, args, selected)
return
print(f"Unknown command: {args.command}")

View File

@@ -1,96 +1,134 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# src/pkgmgr/cli/parser/common.py
from __future__ import annotations
import argparse
from typing import Optional, Tuple
class SortedSubParsersAction(argparse._SubParsersAction):
"""
Subparsers action that keeps choices sorted alphabetically.
Subparsers action that keeps subcommands sorted alphabetically.
"""
def add_parser(self, name, **kwargs):
parser = super().add_parser(name, **kwargs)
# Sort choices alphabetically by dest (subcommand name)
self._choices_actions.sort(key=lambda a: a.dest)
return parser
def _has_action(
parser: argparse.ArgumentParser,
*,
positional: Optional[str] = None,
options: Tuple[str, ...] = (),
) -> bool:
"""
Check whether the parser already has an action.
- positional: name of a positional argument (e.g. "identifiers")
- options: option strings (e.g. "--preview", "-q")
"""
for action in parser._actions:
if positional and action.dest == positional:
return True
if options and any(opt in action.option_strings for opt in options):
return True
return False
def _add_positional_if_missing(
parser: argparse.ArgumentParser,
name: str,
**kwargs,
) -> None:
"""Safely add a positional argument."""
if _has_action(parser, positional=name):
return
parser.add_argument(name, **kwargs)
def _add_option_if_missing(
parser: argparse.ArgumentParser,
*option_strings: str,
**kwargs,
) -> None:
"""Safely add an optional argument."""
if _has_action(parser, options=tuple(option_strings)):
return
parser.add_argument(*option_strings, **kwargs)
def add_identifier_arguments(subparser: argparse.ArgumentParser) -> None:
"""
Common identifier / selection arguments for many subcommands.
Selection modes (mutually exclusive by intent, not hard-enforced):
- identifiers (positional): select by alias / provider/account/repo
- --all: select all repositories
- --category / --string / --tag: filter-based selection on top
of the full repository set
"""
subparser.add_argument(
_add_positional_if_missing(
subparser,
"identifiers",
nargs="*",
help=(
"Identifier(s) for repositories. "
"Default: Repository of current folder."
"Default: repository of the current working directory."
),
)
subparser.add_argument(
_add_option_if_missing(
subparser,
"--all",
action="store_true",
default=False,
help=(
"Apply the subcommand to all repositories in the config. "
"Some subcommands ask for confirmation. If you want to give this "
"confirmation for all repositories, pipe 'yes'. E.g: "
"yes | pkgmgr {subcommand} --all"
"Pipe 'yes' to auto-confirm. Example:\n"
" yes | pkgmgr <command> --all"
),
)
subparser.add_argument(
_add_option_if_missing(
subparser,
"--category",
nargs="+",
default=[],
help=(
"Filter repositories by category patterns derived from config "
"filenames or repo metadata (use filename without .yml/.yaml, "
"or /regex/ to use a regular expression)."
),
help="Filter repositories by category (supports /regex/).",
)
subparser.add_argument(
_add_option_if_missing(
subparser,
"--string",
default="",
help=(
"Filter repositories whose identifier / name / path contains this "
"substring (case-insensitive). Use /regex/ for regular expressions."
),
help="Filter repositories by substring or /regex/.",
)
subparser.add_argument(
_add_option_if_missing(
subparser,
"--tag",
action="append",
default=[],
help=(
"Filter repositories by tag. Matches tags from the repository "
"collector and category tags. Use /regex/ for regular expressions."
),
help="Filter repositories by tag (supports /regex/).",
)
subparser.add_argument(
_add_option_if_missing(
subparser,
"--preview",
action="store_true",
help="Preview changes without executing commands",
help="Preview changes without executing commands.",
)
subparser.add_argument(
_add_option_if_missing(
subparser,
"--list",
action="store_true",
help="List affected repositories (with preview or status)",
help="List affected repositories.",
)
subparser.add_argument(
_add_option_if_missing(
subparser,
"-a",
"--args",
nargs=argparse.REMAINDER,
dest="extra_args",
help="Additional parameters to be attached.",
nargs=argparse.REMAINDER,
default=[],
help="Additional parameters to be attached.",
)
@@ -99,29 +137,34 @@ def add_install_update_arguments(subparser: argparse.ArgumentParser) -> None:
Common arguments for install/update commands.
"""
add_identifier_arguments(subparser)
subparser.add_argument(
_add_option_if_missing(
subparser,
"-q",
"--quiet",
action="store_true",
help="Suppress warnings and info messages",
help="Suppress warnings and info messages.",
)
subparser.add_argument(
_add_option_if_missing(
subparser,
"--no-verification",
action="store_true",
default=False,
help="Disable verification via commit/gpg",
help="Disable verification via commit / GPG.",
)
subparser.add_argument(
_add_option_if_missing(
subparser,
"--dependencies",
action="store_true",
help="Also pull and update dependencies",
help="Also pull and update dependencies.",
)
subparser.add_argument(
_add_option_if_missing(
subparser,
"--clone-mode",
choices=["ssh", "https", "shallow"],
default="ssh",
help=(
"Specify the clone mode: ssh, https, or shallow "
"(HTTPS shallow clone; default: ssh)"
),
help="Specify clone mode (default: ssh).",
)
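The *_if_missing helpers make argument registration idempotent, which matters because add_identifier_arguments() and add_install_update_arguments() may both touch the same subparser. A small standalone illustration (hypothetical usage):
import argparse

parser = argparse.ArgumentParser()
_add_option_if_missing(parser, "--preview", action="store_true")
# A second call is a no-op instead of raising argparse.ArgumentError:
_add_option_if_missing(parser, "--preview", action="store_true")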

View File

@@ -1,11 +1,12 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
from .common import add_install_update_arguments, add_identifier_arguments
from pkgmgr.cli.parser.common import (
add_install_update_arguments,
add_identifier_arguments,
)
def add_install_update_subparsers(
@@ -14,11 +15,17 @@ def add_install_update_subparsers(
"""
Register install / update / deinstall / delete commands.
"""
install_parser = subparsers.add_parser(
"install",
help="Setup repository/repositories alias links to executables",
)
add_install_update_arguments(install_parser)
install_parser.add_argument(
"--update",
action="store_true",
help="Force re-run installers (upgrade/refresh) even if the CLI layer is already loaded",
)
update_parser = subparsers.add_parser(
"update",
@@ -27,9 +34,11 @@ def add_install_update_subparsers(
add_install_update_arguments(update_parser)
update_parser.add_argument(
"--system",
dest="system",
action="store_true",
help="Include system update commands",
)
# No --update here: update implies force_update=True
deinstall_parser = subparsers.add_parser(
"deinstall",

View File

@@ -1,3 +1,4 @@
# src/pkgmgr/cli/parser/mirror_cmd.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
@@ -8,103 +9,55 @@ import argparse
from .common import add_identifier_arguments
def add_mirror_subparsers(
subparsers: argparse._SubParsersAction,
) -> None:
"""
Register mirror command and its subcommands (list, diff, merge, setup).
"""
def add_mirror_subparsers(subparsers: argparse._SubParsersAction) -> None:
mirror_parser = subparsers.add_parser(
"mirror",
help="Mirror-related utilities (list, diff, merge, setup)",
help="Mirror-related utilities (list, diff, merge, setup, check, provision)",
)
mirror_subparsers = mirror_parser.add_subparsers(
dest="subcommand",
help="Mirror subcommands",
metavar="SUBCOMMAND",
required=True,
)
# ------------------------------------------------------------------
# mirror list
# ------------------------------------------------------------------
mirror_list = mirror_subparsers.add_parser(
"list",
help="List configured mirrors for repositories",
)
mirror_list = mirror_subparsers.add_parser("list", help="List configured mirrors for repositories")
add_identifier_arguments(mirror_list)
mirror_list.add_argument(
"--source",
choices=["all", "config", "file", "resolved"],
choices=["config", "file", "all"],
default="all",
help="Which mirror source to show.",
)
# ------------------------------------------------------------------
# mirror diff
# ------------------------------------------------------------------
mirror_diff = mirror_subparsers.add_parser(
"diff",
help="Show differences between config mirrors and MIRRORS file",
)
mirror_diff = mirror_subparsers.add_parser("diff", help="Show differences between config mirrors and MIRRORS file")
add_identifier_arguments(mirror_diff)
# ------------------------------------------------------------------
# mirror merge {config,file} {config,file}
# ------------------------------------------------------------------
mirror_merge = mirror_subparsers.add_parser(
"merge",
help=(
"Merge mirrors between config and MIRRORS file "
"(example: pkgmgr mirror merge config file --all)"
),
help="Merge mirrors between config and MIRRORS file (example: pkgmgr mirror merge config file --all)",
)
# First define merge direction positionals, then selection args.
mirror_merge.add_argument(
"source",
choices=["config", "file"],
help="Source of mirrors.",
)
mirror_merge.add_argument(
"target",
choices=["config", "file"],
help="Target of mirrors.",
)
# Selection / filter / preview arguments
mirror_merge.add_argument("source", choices=["config", "file"], help="Source of mirrors.")
mirror_merge.add_argument("target", choices=["config", "file"], help="Target of mirrors.")
add_identifier_arguments(mirror_merge)
mirror_merge.add_argument(
"--config-path",
help=(
"Path to the user config file to update. "
"If omitted, the global config path is used."
),
help="Path to the user config file to update. If omitted, the global config path is used.",
)
# Note: --preview, --all, --category, --tag, --list, etc. are provided
# by add_identifier_arguments().
# ------------------------------------------------------------------
# mirror setup
# ------------------------------------------------------------------
mirror_setup = mirror_subparsers.add_parser(
"setup",
help=(
"Setup mirror configuration for repositories.\n"
" --local → configure local Git (remotes, pushurls)\n"
" --remote → create remote repositories if missing\n"
"Default: both local and remote."
),
help="Configure local Git remotes and push URLs (origin, pushurl list).",
)
add_identifier_arguments(mirror_setup)
mirror_setup.add_argument(
"--local",
action="store_true",
help="Only configure the local Git repository.",
mirror_check = mirror_subparsers.add_parser(
"check",
help="Check remote mirror reachability (git ls-remote). Read-only.",
)
mirror_setup.add_argument(
"--remote",
action="store_true",
help="Only operate on remote repositories.",
add_identifier_arguments(mirror_check)
mirror_provision = mirror_subparsers.add_parser(
"provision",
help="Provision remote repositories via provider APIs (create missing repos).",
)
# Note: --preview also comes from add_identifier_arguments().
add_identifier_arguments(mirror_provision)
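A simplified, self-contained stand-in (not the real pkgmgr parser, which also wires the identifier/selection arguments) showing how the new check and provision subcommands surface on the parsed namespace:

import argparse

parser = argparse.ArgumentParser(prog="pkgmgr")
commands = parser.add_subparsers(dest="command")
mirror = commands.add_parser("mirror")
mirror_sub = mirror.add_subparsers(dest="subcommand", metavar="SUBCOMMAND", required=True)
for name in ("list", "diff", "merge", "setup", "check", "provision"):
    mirror_sub.add_parser(name)

args = parser.parse_args(["mirror", "check"])
print(args.command, args.subcommand)  # mirror check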

View File

@@ -28,6 +28,7 @@ PROXY_COMMANDS: Dict[str, List[str]] = {
"reset",
"revert",
"rebase",
"status",
"commit",
],
"docker": [

View File

@@ -0,0 +1,30 @@
# src/pkgmgr/core/command/layer.py
from __future__ import annotations
from enum import Enum
class CliLayer(str, Enum):
"""
CLI layer precedence (lower number = stronger layer).
"""
OS_PACKAGES = "os-packages"
NIX = "nix"
PYTHON = "python"
MAKEFILE = "makefile"
_LAYER_PRIORITY: dict[CliLayer, int] = {
CliLayer.OS_PACKAGES: 0,
CliLayer.NIX: 1,
CliLayer.PYTHON: 2,
CliLayer.MAKEFILE: 3,
}
def layer_priority(layer: CliLayer) -> int:
"""
Return precedence priority for the given layer.
Lower value means higher priority (stronger layer).
"""
return _LAYER_PRIORITY.get(layer, 999)
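A small usage sketch for the precedence helper above; the detected-layer list is invented for the example and the import path is taken from the file header comment.

from pkgmgr.core.command.layer import CliLayer, layer_priority  # path from the header comment above

# Hypothetical layers detected for a repository; the lowest priority value wins.
detected = [CliLayer.MAKEFILE, CliLayer.PYTHON, CliLayer.NIX]
strongest = min(detected, key=layer_priority)
print(strongest.value, layer_priority(strongest))  # nix 1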

View File

@@ -1,4 +1,3 @@
from typing import Optional
import os
import shutil
from typing import Optional, List, Dict, Any
@@ -201,8 +200,8 @@ def resolve_command_for_repo(
print(
f"[INFO] Repository '{repo_identifier}' appears to be a Python "
f"package at '{python_package_root}' but no CLI entry point was "
f"found (PATH, Nix, main.sh/main.py). Treating it as a "
f"library-only repository with no command."
"found (PATH, Nix, main.sh/main.py). Treating it as a "
"library-only repository with no command."
)
return None

View File

@@ -1,10 +1,10 @@
from typing import Optional
# pkgmgr/run_command.py
from __future__ import annotations
import selectors
import subprocess
import sys
from typing import List, Optional, Union
CommandType = Union[str, List[str]]
@@ -15,32 +15,97 @@ def run_command(
allow_failure: bool = False,
) -> subprocess.CompletedProcess:
"""
Run a command and optionally exit on error.
Run a command with live output while capturing stdout/stderr.
- If `cmd` is a string, it is executed with `shell=True`.
- If `cmd` is a list of strings, it is executed without a shell.
- Output is streamed live to the terminal.
- Output is captured in memory.
- On failure, captured stdout/stderr are printed again so errors are never lost.
- Command is executed exactly once.
"""
if isinstance(cmd, str):
display = cmd
else:
display = " ".join(cmd)
display = cmd if isinstance(cmd, str) else " ".join(cmd)
where = cwd or "."
if preview:
print(f"[Preview] In '{where}': {display}")
# Fake a successful result; most callers ignore the return value anyway
return subprocess.CompletedProcess(cmd, 0) # type: ignore[arg-type]
print(f"Running in '{where}': {display}")
if isinstance(cmd, str):
result = subprocess.run(cmd, cwd=cwd, shell=True)
else:
result = subprocess.run(cmd, cwd=cwd)
process = subprocess.Popen(
cmd,
cwd=cwd,
shell=isinstance(cmd, str),
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
bufsize=1,
)
if result.returncode != 0 and not allow_failure:
print(f"Command failed with exit code {result.returncode}. Exiting.")
sys.exit(result.returncode)
assert process.stdout is not None
assert process.stderr is not None
return result
sel = selectors.DefaultSelector()
sel.register(process.stdout, selectors.EVENT_READ, data="stdout")
sel.register(process.stderr, selectors.EVENT_READ, data="stderr")
stdout_lines: List[str] = []
stderr_lines: List[str] = []
try:
while sel.get_map():
for key, _ in sel.select():
stream = key.fileobj
which = key.data
line = stream.readline()
if line == "":
# EOF: stop watching this stream
try:
sel.unregister(stream)
except Exception:
pass
continue
if which == "stdout":
stdout_lines.append(line)
print(line, end="")
else:
stderr_lines.append(line)
print(line, end="", file=sys.stderr)
finally:
# Ensure we don't leak FDs
try:
sel.close()
finally:
try:
process.stdout.close()
except Exception:
pass
try:
process.stderr.close()
except Exception:
pass
returncode = process.wait()
if returncode != 0 and not allow_failure:
print("\n[pkgmgr] Command failed, captured diagnostics:", file=sys.stderr)
print(f"[pkgmgr] Failed command: {display}", file=sys.stderr)
if stdout_lines:
print("----- stdout -----")
print("".join(stdout_lines), end="")
if stderr_lines:
print("----- stderr -----", file=sys.stderr)
print("".join(stderr_lines), end="", file=sys.stderr)
print(f"Command failed with exit code {returncode}. Exiting.")
sys.exit(returncode)
return subprocess.CompletedProcess(
cmd,
returncode,
stdout="".join(stdout_lines),
stderr="".join(stderr_lines),
)
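A minimal usage sketch of the rewritten runner; the import path is assumed from the "# pkgmgr/run_command.py" header and the commands are placeholders.

from pkgmgr.run_command import run_command  # import path assumed from the header comment

# List form: no shell; output is streamed live and also captured on the result.
result = run_command(["echo", "hello"], cwd=".")
print(result.returncode, result.stdout.strip())  # 0 hello

# String form: runs via the shell; allow_failure=True returns instead of calling sys.exit().
failed = run_command("exit 3", cwd=".", allow_failure=True)
print(failed.returncode)  # 3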

View File

@@ -0,0 +1,21 @@
# src/pkgmgr/core/credentials/__init__.py
"""Credential resolution for provider APIs."""
from .resolver import ResolutionOptions, TokenResolver
from .types import (
CredentialError,
KeyringUnavailableError,
NoCredentialsError,
TokenRequest,
TokenResult,
)
__all__ = [
"TokenResolver",
"ResolutionOptions",
"CredentialError",
"NoCredentialsError",
"KeyringUnavailableError",
"TokenRequest",
"TokenResult",
]

View File

@@ -0,0 +1,11 @@
"""Credential providers used by TokenResolver."""
from .env import EnvTokenProvider
from .keyring import KeyringTokenProvider
from .prompt import PromptTokenProvider
__all__ = [
"EnvTokenProvider",
"KeyringTokenProvider",
"PromptTokenProvider",
]

View File

@@ -0,0 +1,23 @@
# src/pkgmgr/core/credentials/providers/env.py
from __future__ import annotations
import os
from dataclasses import dataclass
from typing import Optional
from ..store_keys import env_var_candidates
from ..types import TokenRequest, TokenResult
@dataclass(frozen=True)
class EnvTokenProvider:
"""Resolve tokens from environment variables."""
source_name: str = "env"
def get(self, request: TokenRequest) -> Optional[TokenResult]:
for key in env_var_candidates(request.provider_kind, request.host, request.owner):
val = os.environ.get(key)
if val:
return TokenResult(token=val.strip(), source=self.source_name)
return None
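A standalone sketch of the environment lookup; the variable name is one of the candidates produced by env_var_candidates() and the token value is a placeholder.

import os
from pkgmgr.core.credentials.providers.env import EnvTokenProvider
from pkgmgr.core.credentials.types import TokenRequest

os.environ["PKGMGR_GITHUB_TOKEN"] = "example-token"  # placeholder value

request = TokenRequest(provider_kind="github", host="github.com")
result = EnvTokenProvider().get(request)
print(result.source, result.token)  # env example-token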

View File

@@ -0,0 +1,39 @@
# src/pkgmgr/core/credentials/providers/keyring.py
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional
from ..store_keys import build_keyring_key
from ..types import KeyringUnavailableError, TokenRequest, TokenResult
def _import_keyring():
try:
import keyring # type: ignore
return keyring
except Exception as exc: # noqa: BLE001
raise KeyringUnavailableError(
"python-keyring is not available or no backend is configured."
) from exc
@dataclass(frozen=True)
class KeyringTokenProvider:
"""Resolve/store tokens from/to OS keyring via python-keyring."""
source_name: str = "keyring"
def get(self, request: TokenRequest) -> Optional[TokenResult]:
keyring = _import_keyring()
key = build_keyring_key(request.provider_kind, request.host, request.owner)
token = keyring.get_password(key.service, key.username)
if token:
return TokenResult(token=token.strip(), source=self.source_name)
return None
def set(self, request: TokenRequest, token: str) -> None:
keyring = _import_keyring()
key = build_keyring_key(request.provider_kind, request.host, request.owner)
keyring.set_password(key.service, key.username, token)

View File

@@ -0,0 +1,32 @@
# src/pkgmgr/core/credentials/providers/prompt.py
from __future__ import annotations
import sys
from dataclasses import dataclass
from getpass import getpass
from typing import Optional
from ..types import TokenRequest, TokenResult
@dataclass(frozen=True)
class PromptTokenProvider:
"""Interactively prompt for a token.
Only used when:
- interactive mode is enabled
- stdin is a TTY
"""
source_name: str = "prompt"
def get(self, request: TokenRequest) -> Optional[TokenResult]:
if not sys.stdin.isatty():
return None
owner_info = f" (owner: {request.owner})" if request.owner else ""
prompt = f"Enter API token for {request.provider_kind} on {request.host}{owner_info}: "
token = (getpass(prompt) or "").strip()
if not token:
return None
return TokenResult(token=token, source=self.source_name)

View File

@@ -0,0 +1,71 @@
# src/pkgmgr/core/credentials/resolver.py
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional
from .providers.env import EnvTokenProvider
from .providers.keyring import KeyringTokenProvider
from .providers.prompt import PromptTokenProvider
from .types import NoCredentialsError, TokenRequest, TokenResult
@dataclass(frozen=True)
class ResolutionOptions:
"""Controls token resolution behavior."""
interactive: bool = True
allow_prompt: bool = True
save_prompt_token_to_keyring: bool = True
class TokenResolver:
"""Resolve tokens from multiple sources (ENV -> Keyring -> Prompt)."""
def __init__(self) -> None:
self._env = EnvTokenProvider()
self._keyring = KeyringTokenProvider()
self._prompt = PromptTokenProvider()
def get_token(
self,
provider_kind: str,
host: str,
owner: Optional[str] = None,
options: Optional[ResolutionOptions] = None,
) -> TokenResult:
opts = options or ResolutionOptions()
request = TokenRequest(provider_kind=provider_kind, host=host, owner=owner)
# 1) ENV
env_res = self._env.get(request)
if env_res:
return env_res
# 2) Keyring
try:
kr_res = self._keyring.get(request)
if kr_res:
return kr_res
except Exception:
# Keyring missing/unavailable: ignore to allow prompt (workstations)
# or to fail cleanly below (headless CI without prompt).
pass
# 3) Prompt (optional)
if opts.interactive and opts.allow_prompt:
prompt_res = self._prompt.get(request)
if prompt_res:
if opts.save_prompt_token_to_keyring:
try:
self._keyring.set(request, prompt_res.token)
except Exception:
# If keyring cannot store, still use token for this run.
pass
return prompt_res
raise NoCredentialsError(
f"No token available for {provider_kind}@{host}"
+ (f" (owner: {owner})" if owner else "")
+ ". Provide it via environment variable or keyring."
)
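A usage sketch for a headless (CI-like) run that disables prompting and relies only on ENV or keyring; the names come from the package exports shown earlier, host and owner are placeholders.

from pkgmgr.core.credentials import NoCredentialsError, ResolutionOptions, TokenResolver

opts = ResolutionOptions(interactive=False, allow_prompt=False)
try:
    result = TokenResolver().get_token("gitea", "git.example.org", owner="acme", options=opts)
    print(f"token resolved from: {result.source}")
except NoCredentialsError as exc:
    # Without an ENV variable or keyring entry, a headless run ends up here.
    print(f"no credentials: {exc}")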

View File

@@ -0,0 +1,54 @@
# src/pkgmgr/core/credentials/store_keys.py
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional
@dataclass(frozen=True)
class KeyringKey:
"""Keyring address for a token."""
service: str
username: str
def build_keyring_key(provider_kind: str, host: str, owner: Optional[str]) -> KeyringKey:
"""Build a stable keyring key.
- service: "pkgmgr:<provider>"
- username: "<host>|<owner>" or "<host>|-"
"""
provider_kind = str(provider_kind).strip().lower()
host = str(host).strip()
owner_part = (str(owner).strip() if owner else "-")
return KeyringKey(service=f"pkgmgr:{provider_kind}", username=f"{host}|{owner_part}")
def env_var_candidates(provider_kind: str, host: str, owner: Optional[str]) -> list[str]:
"""Return a list of environment variable names to try.
Order is from most specific to most generic.
"""
kind = re_sub_non_alnum(str(provider_kind).strip().upper())
host_norm = re_sub_non_alnum(str(host).strip().upper())
candidates: list[str] = []
if owner:
owner_norm = re_sub_non_alnum(str(owner).strip().upper())
candidates.append(f"PKGMGR_{kind}_TOKEN_{host_norm}_{owner_norm}")
candidates.append(f"PKGMGR_TOKEN_{kind}_{host_norm}_{owner_norm}")
candidates.append(f"PKGMGR_{kind}_TOKEN_{host_norm}")
candidates.append(f"PKGMGR_TOKEN_{kind}_{host_norm}")
candidates.append(f"PKGMGR_{kind}_TOKEN")
candidates.append(f"PKGMGR_TOKEN_{kind}")
candidates.append("PKGMGR_TOKEN")
return candidates
def re_sub_non_alnum(value: str) -> str:
"""Normalize to an uppercase env-var friendly token (A-Z0-9_)."""
import re
return re.sub(r"[^A-Z0-9]+", "_", value).strip("_")
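For a concrete (made-up) Gitea host and owner, the lookup order produced by env_var_candidates() is:

from pkgmgr.core.credentials.store_keys import env_var_candidates

for name in env_var_candidates("gitea", "git.example.org", "acme"):
    print(name)
# PKGMGR_GITEA_TOKEN_GIT_EXAMPLE_ORG_ACME
# PKGMGR_TOKEN_GITEA_GIT_EXAMPLE_ORG_ACME
# PKGMGR_GITEA_TOKEN_GIT_EXAMPLE_ORG
# PKGMGR_TOKEN_GITEA_GIT_EXAMPLE_ORG
# PKGMGR_GITEA_TOKEN
# PKGMGR_TOKEN_GITEA
# PKGMGR_TOKEN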

View File

@@ -0,0 +1,34 @@
# src/pkgmgr/core/credentials/types.py
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional
class CredentialError(RuntimeError):
"""Base class for credential resolution errors."""
class NoCredentialsError(CredentialError):
"""Raised when no usable credential could be resolved."""
class KeyringUnavailableError(CredentialError):
"""Raised when keyring is requested but no backend is available."""
@dataclass(frozen=True)
class TokenRequest:
"""Parameters describing which token we need."""
provider_kind: str # e.g. "gitea", "github"
host: str # e.g. "git.example.org" or "github.com"
owner: Optional[str] = None # optional org/user
@dataclass(frozen=True)
class TokenResult:
"""A resolved token plus metadata about its source."""
token: str
source: str # "env" | "keyring" | "prompt"

View File

@@ -0,0 +1,14 @@
# src/pkgmgr/core/remote_provisioning/__init__.py
"""Remote repository provisioning (ensure remote repo exists)."""
from .ensure import ensure_remote_repo
from .registry import ProviderRegistry
from .types import EnsureResult, ProviderHint, RepoSpec
__all__ = [
"ensure_remote_repo",
"RepoSpec",
"EnsureResult",
"ProviderHint",
"ProviderRegistry",
]

View File

@@ -0,0 +1,97 @@
# src/pkgmgr/core/remote_provisioning/ensure.py
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional
from pkgmgr.core.credentials.resolver import ResolutionOptions, TokenResolver
from .http.errors import HttpError
from .registry import ProviderRegistry
from .types import (
AuthError,
EnsureResult,
NetworkError,
PermissionError,
ProviderHint,
RepoSpec,
UnsupportedProviderError,
)
@dataclass(frozen=True)
class EnsureOptions:
"""Options controlling remote provisioning."""
preview: bool = False
interactive: bool = True
allow_prompt: bool = True
save_prompt_token_to_keyring: bool = True
def _raise_mapped_http_error(exc: HttpError, host: str) -> None:
"""Map HttpError into domain-specific error types."""
if exc.status == 0:
raise NetworkError(f"Network error while talking to {host}: {exc}") from exc
if exc.status == 401:
raise AuthError(f"Authentication failed for {host} (401).") from exc
if exc.status == 403:
raise PermissionError(f"Permission denied for {host} (403).") from exc
raise NetworkError(
f"HTTP error from {host}: status={exc.status}, message={exc}, body={exc.body}"
) from exc
def ensure_remote_repo(
spec: RepoSpec,
provider_hint: Optional[ProviderHint] = None,
options: Optional[EnsureOptions] = None,
registry: Optional[ProviderRegistry] = None,
token_resolver: Optional[TokenResolver] = None,
) -> EnsureResult:
"""Ensure that the remote repository exists (create if missing).
- Uses TokenResolver (ENV -> keyring -> prompt)
- Selects provider via ProviderRegistry (or provider_hint override)
- Respects preview mode (no remote changes)
- Maps HTTP errors to domain-specific errors
"""
opts = options or EnsureOptions()
reg = registry or ProviderRegistry.default()
resolver = token_resolver or TokenResolver()
provider = reg.resolve(spec.host)
if provider_hint and provider_hint.kind:
forced = provider_hint.kind.strip().lower()
provider = next(
(p for p in reg.providers if getattr(p, "kind", "").lower() == forced),
None,
)
if provider is None:
raise UnsupportedProviderError(f"No provider matched host: {spec.host}")
token_opts = ResolutionOptions(
interactive=opts.interactive,
allow_prompt=opts.allow_prompt,
save_prompt_token_to_keyring=opts.save_prompt_token_to_keyring,
)
token = resolver.get_token(
provider_kind=getattr(provider, "kind", "unknown"),
host=spec.host,
owner=spec.owner,
options=token_opts,
)
if opts.preview:
return EnsureResult(
status="skipped",
message="Preview mode: no remote changes performed.",
)
try:
return provider.ensure_repo(token.token, spec)
except HttpError as exc:
_raise_mapped_http_error(exc, host=spec.host)
return EnsureResult(status="failed", message="Unreachable error mapping.")
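A preview-mode sketch of the provisioning entry point. RepoSpec and ProviderHint field names beyond host, owner and kind are not visible in this diff and are assumed here; note that a token is resolved even in preview mode, so the sketch seeds one via the environment.

import os
from pkgmgr.core.remote_provisioning import ProviderHint, RepoSpec, ensure_remote_repo
from pkgmgr.core.remote_provisioning.ensure import EnsureOptions

os.environ.setdefault("PKGMGR_GITEA_TOKEN", "example-token")  # preview still resolves a token first

spec = RepoSpec(host="git.example.org", owner="acme", name="demo-repo")  # "name" field is an assumption
result = ensure_remote_repo(
    spec,
    provider_hint=ProviderHint(kind="gitea"),  # "kind" keyword assumed from provider_hint.kind usage above
    options=EnsureOptions(preview=True, interactive=False, allow_prompt=False),
)
print(result.status, result.message)  # skipped Preview mode: no remote changes performed.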

View File

@@ -0,0 +1,5 @@
# src/pkgmgr/core/remote_provisioning/http/__init__.py
from .client import HttpClient, HttpResponse
from .errors import HttpError
__all__ = ["HttpClient", "HttpResponse", "HttpError"]

View File

@@ -0,0 +1,69 @@
# src/pkgmgr/core/remote_provisioning/http/client.py
from __future__ import annotations
import json
import ssl
import urllib.error
import urllib.request
from dataclasses import dataclass
from typing import Any, Dict, Optional
from .errors import HttpError
@dataclass(frozen=True)
class HttpResponse:
status: int
text: str
json: Optional[Dict[str, Any]] = None
class HttpClient:
"""Tiny HTTP client (stdlib) with JSON support."""
def __init__(self, timeout_s: int = 15) -> None:
self._timeout_s = int(timeout_s)
def request_json(
self,
method: str,
url: str,
headers: Optional[Dict[str, str]] = None,
payload: Optional[Dict[str, Any]] = None,
) -> HttpResponse:
data: Optional[bytes] = None
final_headers: Dict[str, str] = dict(headers or {})
if payload is not None:
data = json.dumps(payload).encode("utf-8")
final_headers.setdefault("Content-Type", "application/json")
req = urllib.request.Request(url=url, data=data, method=method.upper())
for k, v in final_headers.items():
req.add_header(k, v)
try:
with urllib.request.urlopen(
req,
timeout=self._timeout_s,
context=ssl.create_default_context(),
) as resp:
raw = resp.read().decode("utf-8", errors="replace")
parsed: Optional[Dict[str, Any]] = None
if raw:
try:
loaded = json.loads(raw)
parsed = loaded if isinstance(loaded, dict) else None
except Exception:
parsed = None
return HttpResponse(status=int(resp.status), text=raw, json=parsed)
except urllib.error.HTTPError as exc:
try:
body = exc.read().decode("utf-8", errors="replace")
except Exception:
body = ""
raise HttpError(status=int(exc.code), message=str(exc), body=body) from exc
except urllib.error.URLError as exc:
raise HttpError(status=0, message=str(exc), body="") from exc
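Standalone usage of the stdlib HTTP client above; the GitHub rate-limit endpoint is just a convenient unauthenticated example.

from pkgmgr.core.remote_provisioning.http import HttpClient, HttpError

client = HttpClient(timeout_s=10)
try:
    resp = client.request_json("GET", "https://api.github.com/rate_limit")
    print(resp.status, sorted((resp.json or {}).keys()))
except HttpError as exc:
    # status == 0 signals a network-level failure (DNS, timeout, TLS, ...).
    print(f"HTTP error: status={exc.status}")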

View File

@@ -0,0 +1,9 @@
# src/pkgmgr/core/remote_provisioning/http/errors.py
from __future__ import annotations
class HttpError(RuntimeError):
def __init__(self, status: int, message: str, body: str = "") -> None:
super().__init__(message)
self.status = status
self.body = body

View File

@@ -0,0 +1,6 @@
# src/pkgmgr/core/remote_provisioning/providers/__init__.py
from .base import RemoteProvider
from .gitea import GiteaProvider
from .github import GitHubProvider
__all__ = ["RemoteProvider", "GiteaProvider", "GitHubProvider"]

View File

@@ -0,0 +1,36 @@
# src/pkgmgr/core/remote_provisioning/providers/base.py
from __future__ import annotations
from abc import ABC, abstractmethod
from ..types import EnsureResult, RepoSpec
class RemoteProvider(ABC):
"""Provider interface for remote repo provisioning."""
kind: str
@abstractmethod
def can_handle(self, host: str) -> bool:
"""Return True if this provider implementation matches the host."""
@abstractmethod
def repo_exists(self, token: str, spec: RepoSpec) -> bool:
"""Return True if repo exists and is accessible."""
@abstractmethod
def create_repo(self, token: str, spec: RepoSpec) -> EnsureResult:
"""Create a repository (owner may be user or org)."""
def ensure_repo(self, token: str, spec: RepoSpec) -> EnsureResult:
if self.repo_exists(token, spec):
return EnsureResult(status="exists", message="Repository exists.")
return self.create_repo(token, spec)
@staticmethod
def _api_base(host: str) -> str:
# Default to https. If you need http for local dev, store host as "http://..."
if host.startswith("http://") or host.startswith("https://"):
return host.rstrip("/")
return f"https://{host}".rstrip("/")
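A toy subclass sketch (not part of pkgmgr) showing the minimum a concrete provider must implement; the status string for created repositories is illustrative only.

from pkgmgr.core.remote_provisioning.providers.base import RemoteProvider
from pkgmgr.core.remote_provisioning.types import EnsureResult, RepoSpec

class DummyProvider(RemoteProvider):
    """Illustrative provider that pretends nothing exists yet."""
    kind = "dummy"

    def can_handle(self, host: str) -> bool:
        return host.endswith("dummy.example")

    def repo_exists(self, token: str, spec: RepoSpec) -> bool:
        return False

    def create_repo(self, token: str, spec: RepoSpec) -> EnsureResult:
        # A real provider would call the provider API via HttpClient here.
        return EnsureResult(status="created", message="Pretended to create the repository.")

The inherited ensure_repo() then checks repo_exists() first and only falls back to create_repo() when the repository is missing.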

Some files were not shown because too many files have changed in this diff.