Compare commits

...

21 Commits

Author SHA1 Message Date
Kevin Veen-Birkenbach
f974e0b14a Release version 0.7.3 2025-12-09 16:08:34 +01:00
Kevin Veen-Birkenbach
de8c3f768d feat(repository): integrate ignore filtering into selection pipeline + add unit tests
This commit introduces proper handling of the `ignore: true` flag in the
repository selection mechanism and adds comprehensive unit tests for both
`ignored.py` and `selected.py`.

- `get_selected_repos()` now filters ignored repositories in all implicit
  selection modes:
    • filter-only mode (string/category/tag)
    • `--all` mode
    • CWD-based selection

- Explicit identifiers (e.g. `pkgmgr install ignored-repo`) **bypass**
  ignore filtering, so the user can still operate directly on ignored
  repositories if they ask for them explicitly.

- Added `_maybe_filter_ignored()` helper to handle the logic cleanly and allow
  future extension (e.g. integrating a CLI flag `--include-ignored`); a sketch
  follows below.
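
A minimal sketch of the intended behaviour, assuming illustrative parameter
names (`explicit`, `include_ignored`) that may differ from the real signatures
in `ignored.py` / `selected.py`:

```python
from typing import Any, Dict, List

Repo = Dict[str, Any]


def filter_ignored(repos: List[Repo]) -> List[Repo]:
    """Drop entries whose config sets `ignore: true`."""
    return [repo for repo in repos if not repo.get("ignore", False)]


def _maybe_filter_ignored(
    repos: List[Repo],
    explicit: bool,
    include_ignored: bool = False,
) -> List[Repo]:
    """Apply ignore filtering only for implicit selection modes.

    Explicit identifiers bypass the filter; include_ignored=True
    (a possible future --include-ignored flag) overrides it as well.
    """
    if explicit or include_ignored:
        return repos
    return filter_ignored(repos)
```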

Under `tests/unit/pkgmgr/core/repository`:

1. **test_ignored.py**
   • Ensures `filter_ignored()` removes repos with `ignore: true`
   • Ensures empty lists are handled correctly (see the test sketch below)

2. **test_selected.py**
   Comprehensive coverage of the selection logic:
   • Explicit identifiers bypass ignore filtering
   • Filter-only mode excludes ignored repos unless `include_ignored=True`
   • `--all` mode excludes ignored repos unless explicitly overridden
   • CWD-based detection filters ignored repos unless explicitly overridden
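
Condensed, illustrative shape of the `test_ignored.py` cases (the import path
is inferred from the test layout above; the real cases may assert more):

```python
import unittest

# Module path inferred from tests/unit/pkgmgr/core/repository.
from pkgmgr.core.repository.ignored import filter_ignored


class TestFilterIgnored(unittest.TestCase):
    def test_removes_repos_with_ignore_true(self):
        repos = [{"repository": "a", "ignore": True}, {"repository": "b"}]
        self.assertEqual(filter_ignored(repos), [{"repository": "b"}])

    def test_empty_list_is_handled(self):
        self.assertEqual(filter_ignored([]), [])


if __name__ == "__main__":
    unittest.main()
```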

Before this change, ignored repositories still appeared in `pkgmgr list`,
`pkgmgr status`, and other commands using `get_selected_repos()`.
This was unintuitive and broke the expected semantics of the `ignore`
attribute.
The new logic ensures ignored repositories are truly invisible unless
explicitly requested.

https://chatgpt.com/share/69383b41-50a0-800f-a2b9-c680cd96d9e9
2025-12-09 16:07:39 +01:00
Kevin Veen-Birkenbach
05ff250251 Release version 0.7.2 2025-12-09 15:49:01 +01:00
Kevin Veen-Birkenbach
ab52d37467 Refactor release helper into actions package and add RPM changelog support
- Move the monolithic pkgmgr/actions/release.py implementation into the
  pkgmgr.actions.release package, splitting concerns into versioning,
  git_ops and files helpers.
- Extend the release orchestration to update Fedora/RPM %changelog
  entries via update_spec_changelog(), reusing the same effective
  release message as for CHANGELOG.md and debian/changelog (a sketch
  follows below).
- Wire the new update_spec_changelog() helper into _release_impl() so
  every release keeps project, Debian and RPM metadata in sync.
- Add unit tests for update_spec_changelog() and for the updated release
  orchestration behaviour in preview and real modes.
- Remove the deprecated pkgmgr/actions/release.py module.
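
A rough sketch of what update_spec_changelog() does; the real helper lives in
pkgmgr.actions.release.files, and the maintainer string and exact entry
formatting here are assumptions:

```python
import re
from datetime import datetime
from typing import Optional


def update_spec_changelog(
    spec_path: str,
    package_name: str,  # kept for parity with the call site; unused in this sketch
    new_version: str,
    message: Optional[str],
    preview: bool = False,
    maintainer: str = "Kevin Veen-Birkenbach <kevin@veen.world>",  # assumption
) -> None:
    """Prepend a %changelog entry to the RPM spec file (sketch)."""
    stamp = datetime.now().strftime("%a %b %d %Y")  # e.g. 'Tue Dec 09 2025'
    body = message or f"Release version {new_version}"
    entry = f"* {stamp} {maintainer} - {new_version}-1\n" + "".join(
        f"- {line}\n" for line in body.splitlines()
    )

    with open(spec_path, "r", encoding="utf-8") as f:
        content = f.read()

    # Insert the new entry directly below the %changelog marker.
    new_content, count = re.subn(
        r"^(%changelog\s*\n)",
        lambda m: m.group(1) + entry,
        content,
        count=1,
        flags=re.MULTILINE,
    )
    if count == 0:
        print("[WARN] No %changelog section found in spec file.")
        return
    if preview:
        print(f"[PREVIEW] Would prepend %changelog entry for {new_version}")
        return

    with open(spec_path, "w", encoding="utf-8") as f:
        f.write(new_content)
```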

See ChatGPT discussion: https://chatgpt.com/share/6938368e-0940-800f-92d3-f2ccfddab794
2025-12-09 15:47:37 +01:00
Kevin Veen-Birkenbach
80329b85fb Release version 0.7.1 2025-12-09 15:26:56 +01:00
Kevin Veen-Birkenbach
44ff0a6cd9 Release version 0.7.0 2025-12-09 15:21:06 +01:00
Kevin Veen-Birkenbach
e00b1a7b69 Solved import bug 2025-12-09 15:03:31 +01:00
Kevin Veen-Birkenbach
14f0188efd Solved e2e naming bugs 2025-12-09 15:02:04 +01:00
Kevin Veen-Birkenbach
a4efb847ba Cleaned Up tests 2025-12-09 14:33:32 +01:00
Kevin Veen-Birkenbach
d50891dfe5 Refactor: Restructure pkgmgr into actions/, core/, and cli/ (full module breakup)
This commit introduces a large-scale structural refactor of the pkgmgr
codebase. All functionality has been moved from the previous flat
top-level layout into three clearly separated namespaces:

  • pkgmgr.actions      – high-level operations invoked by the CLI
  • pkgmgr.core         – pure logic, helpers, repository utilities,
                          versioning, git helpers, config IO, and
                          command resolution
  • pkgmgr.cli          – parser, dispatch, context, and command
                          handlers

Key improvements:
  - Moved all “branch”, “release”, “changelog”, repo-management
    actions, installer pipelines, and proxy execution logic into
    pkgmgr.actions.<domain>.
  - Reworked installer structure under
        pkgmgr.actions.repository.install.installers
    including OS-package installers, Nix, Python, and Makefile.
  - Consolidated all low-level functionality under pkgmgr.core:
        • git helpers → core/git
        • config load/save → core/config
        • repository helpers → core/repository
        • versioning & semver → core/version
        • command helpers (alias, resolve, run, ink) → core/command
  - Replaced pkgmgr.cli_core with pkgmgr.cli and updated all imports.
  - Added minimal __init__.py files for clean package exposure.
  - Updated all E2E, integration, and unit tests with new module paths.
  - Fixed patch targets so mocks point to the new structure.
  - Ensured backward compatibility at the CLI boundary (pkgmgr entry point unchanged).

This refactor produces a cleaner, layered architecture:
  - `core` = logic
  - `actions` = orchestrated behaviour
  - `cli` = user interface
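
For illustration, the import mapping from the old flat layout to the new
layered one (the "after" lines appear verbatim in the diffs further down):

```python
# Before (flat layout):
#   from pkgmgr.git_utils import run_git, GitError
#   from pkgmgr.get_repo_identifier import get_repo_identifier
#   from pkgmgr.run_command import run_command

# After (layered layout):
from pkgmgr.core.git import run_git, GitError
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.command.run import run_command
from pkgmgr.actions.branch import close_branch
```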

Reference: ChatGPT-assisted refactor discussion
https://chatgpt.com/share/6938221c-e24c-800f-8317-7732cedf39b9
2025-12-09 14:20:19 +01:00
Kevin Veen-Birkenbach
59d0355b91 Release version 0.6.0 2025-12-09 05:59:58 +01:00
Kevin Veen-Birkenbach
da9d5cfa6b Fix container tests, unify RPM install path, and ensure Nix TLS truststore detection
Changes included:
• GitHub Actions workflow: rename job from 'test-unit' to 'test-container' to match intent.
• RPM packaging: replace %{_libdir}/package-manager with a fixed /usr/lib/package-manager
  to avoid lib/lib64 divergence on CentOS and ensure pkgmgr + Nix flake resolution works
  consistently across distros.
• Docker entrypoint: add automatic CA-bundle detection and set NIX_SSL_CERT_FILE to fix
  TLS issues on CentOS ('unable to get local issuer certificate') when Nix fetches flake
  inputs.

These updates stabilize container-based tests and unify the runtime environment
for Fedora, CentOS, and other distributions.
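
The entrypoint itself is a shell script; as a Python sketch of the same
detection idea (the candidate bundle paths are assumptions, not the script's
actual list):

```python
import os

# Typical system truststore locations (assumed candidates).
CA_BUNDLE_CANDIDATES = [
    "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem",  # Fedora/CentOS
    "/etc/ssl/certs/ca-certificates.crt",                 # Debian/Ubuntu/Arch
]


def detect_nix_ssl_cert() -> None:
    """Point NIX_SSL_CERT_FILE at the first available CA bundle."""
    if os.environ.get("NIX_SSL_CERT_FILE"):
        return  # already configured
    for path in CA_BUNDLE_CANDIDATES:
        if os.path.isfile(path):
            os.environ["NIX_SSL_CERT_FILE"] = path
            break
```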

Reference:
ChatGPT conversation: https://chatgpt.com/share/6937aa72-d33c-800f-a63f-c353e92de6b3
2025-12-09 05:50:08 +01:00
Kevin Veen-Birkenbach
f9943fafae Refactor container build and installation pipeline to use configurable Makefile parameters (e.g. DISTROS, base images) and propagate them through all build, install, and test scripts 2025-12-09 05:31:55 +01:00
Kevin Veen-Birkenbach
7d73007181 Release version 0.5.1 2025-12-09 01:21:31 +01:00
Kevin Veen-Birkenbach
c8462fefa4 Release version 0.5.0 2025-12-09 00:44:16 +01:00
Kevin Veen-Birkenbach
00a1f373ce Merge branch 'feature/config_v2.0' 2025-12-09 00:29:19 +01:00
Kevin Veen-Birkenbach
9f9f2e68c0 Release version 0.4.3 2025-12-09 00:29:08 +01:00
Kevin Veen-Birkenbach
d25dcb05e4 Merge branch 'feature/branch_close' 2025-12-09 00:03:56 +01:00
Kevin Veen-Birkenbach
e135d39710 Release version 0.4.2 2025-12-09 00:03:46 +01:00
Kevin Veen-Birkenbach
76b7f84989 Release version 0.4.1 2025-12-08 23:20:28 +01:00
Kevin Veen-Birkenbach
8ea7ff23e9 Release version 0.3.0 2025-12-08 22:40:50 +01:00
168 changed files with 5645 additions and 2643 deletions

.github/workflows/test-container.yml (new file)

@@ -0,0 +1,25 @@
name: Test Distribution Containers

on:
  push:
    branches:
      - main
      - master
      - develop
      - "*"
  pull_request:

jobs:
  test-container:
    runs-on: ubuntu-latest
    timeout-minutes: 30
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Show Docker version
        run: docker version
      - name: Run container tests
        run: make test-container

.gitignore

@@ -15,7 +15,7 @@ venv/
# Build artifacts
dist/
build/
build/*
*.egg-info/
# Editor files
@@ -31,4 +31,11 @@ Thumbs.db
# Ignore logs
*.log
package-manager-*
package-manager-*
# debian
debian/package-manager/
debian/debhelper-build-stamp
debian/files
debian/.debhelper/
debian/package-manager.substvars

CHANGELOG.md

@@ -1,7 +1,68 @@
## [0.7.3] - 2025-12-09
* Fixed bug: Ignored packages are now ignored
## [0.7.2] - 2025-12-09
* Implemented Changelog Support for Fedora and Debian
## [0.7.1] - 2025-12-09
* Fix floating 'latest' tag logic: dereference annotated target (vX.Y.Z^{}), add tag message to avoid Git errors, ensure best-effort update without blocking releases, and update unit tests (see ChatGPT conversation: https://chatgpt.com/share/69383024-efa4-800f-a875-129b81fa40ff).
## [0.7.0] - 2025-12-09
* Add Git helpers for branch sync and floating 'latest' tag in the release workflow, ensure main/master are updated from origin before tagging, and extend unit/e2e tests including 'pkgmgr release --help' coverage (see ChatGPT conversation: https://chatgpt.com/share/69383024-efa4-800f-a875-129b81fa40ff)
## [0.6.0] - 2025-12-09
* Expose DISTROS and BASE_IMAGE_* variables as exported Makefile environment variables so all build and test commands can consume them dynamically. By exporting these values, every Make target (e.g., build, build-no-cache, build-missing, test-container, test-unit, test-e2e) and every delegated script in scripts/build/ and scripts/test/ now receives a consistent view of the supported distributions and their base container images. This change removes duplicated definitions across scripts, ensures reproducible builds, and allows build tooling to react automatically when new distros or base images are added to the Makefile.
## [0.5.1] - 2025-12-09
* Refine pkgmgr release CLI close wiring and integration tests for --close flag (ChatGPT: https://chatgpt.com/share/69376b4e-8440-800f-9d06-535ec1d7a40e)
## [0.5.0] - 2025-12-09
* Add pkgmgr branch close subcommand, extend CLI parser wiring, and add unit tests for branch handling and version version-selection logic (see ChatGPT conversation: https://chatgpt.com/share/693762a3-9ea8-800f-a640-bc78170953d1)
## [0.4.3] - 2025-12-09
* Implement current-directory repository selection for release and proxy commands, unify selection semantics across CLI layers, extend release workflow with --close, integrate branch closing logic, fix wiring for get_repo_identifier/get_repo_dir, update packaging files (PKGBUILD, spec, flake.nix, pyproject), and add comprehensive unit/e2e tests for release and branch commands (see ChatGPT conversation: https://chatgpt.com/share/69375cfe-9e00-800f-bd65-1bd5937e1696)
## [0.4.2] - 2025-12-09
* Wire pkgmgr release CLI to new helper and add unit tests (see ChatGPT conversation: https://chatgpt.com/share/69374f09-c760-800f-92e4-5b44a4510b62)
## [0.4.1] - 2025-12-08
* Add branch close subcommand and integrate release close/editor flow (ChatGPT: https://chatgpt.com/share/69374f09-c760-800f-92e4-5b44a4510b62)
## [0.4.0] - 2025-12-08
* Add branch closing helper and --close flag to release command, including CLI wiring and tests (see https://chatgpt.com/share/69374aec-74ec-800f-bde3-5d91dfdb9b91)
## [0.3.0] - 2025-12-08
* Massive refactor and feature expansion:
- Complete rewrite of config loading system (layered defaults + user config)
- New selection engine (--string, --category, --tag)
- Overhauled list output (colored statuses, alias highlight)
- New config update logic + default YAML sync
- Improved proxy command handling
- Full CLI routing refactor
- Expanded E2E tests for list, proxy, and selection logic
Conversation: https://chatgpt.com/share/693745c3-b8d8-800f-aa29-c8481a2ffae1
## [0.2.0] - 2025-12-08

Dockerfile

@@ -4,87 +4,6 @@
ARG BASE_IMAGE=archlinux:latest
FROM ${BASE_IMAGE}
# ------------------------------------------------------------
# System base + conditional package tool installation
#
# Important:
# - We do NOT install Nix directly here via curl.
# - Nix is installed/initialized by init-nix.sh, which is invoked
# from the system packaging hooks (Arch .install, Debian postinst,
# RPM %post).
# ------------------------------------------------------------
RUN set -e; \
if [ -f /etc/os-release ]; then . /etc/os-release; else echo "No /etc/os-release found" && exit 1; fi; \
echo "Detected base image: ${ID:-unknown} (like: ${ID_LIKE:-})"; \
\
if [ "$ID" = "arch" ]; then \
pacman -Syu --noconfirm && \
pacman -S --noconfirm --needed \
base-devel \
git \
rsync \
curl \
ca-certificates \
xz && \
pacman -Scc --noconfirm; \
elif [ "$ID" = "debian" ]; then \
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
build-essential \
debhelper \
dpkg-dev \
git \
rsync \
bash \
curl \
ca-certificates \
xz-utils && \
rm -rf /var/lib/apt/lists/*; \
elif [ "$ID" = "ubuntu" ]; then \
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
build-essential \
debhelper \
dpkg-dev \
git \
tzdata \
lsb-release \
rsync \
bash \
curl \
ca-certificates \
xz-utils && \
rm -rf /var/lib/apt/lists/*; \
elif [ "$ID" = "fedora" ]; then \
dnf -y update && \
dnf -y install \
git \
rsync \
rpm-build \
make \
gcc \
bash \
curl \
ca-certificates \
xz && \
dnf clean all; \
elif [ "$ID" = "centos" ]; then \
dnf -y update && \
dnf -y install \
git \
rsync \
rpm-build \
make \
gcc \
bash \
curl-minimal \
ca-certificates \
xz && \
dnf clean all; \
else \
echo "Unsupported base image: ${ID}" && exit 1; \
fi
# ------------------------------------------------------------
# Nix environment defaults
#
@@ -96,94 +15,38 @@ ENV NIX_CONFIG="experimental-features = nix-command flakes"
# ------------------------------------------------------------
# Unprivileged user for Arch package build (makepkg)
# ------------------------------------------------------------
RUN useradd -m builder || true
RUN useradd -m aur_builder || true
# ------------------------------------------------------------
# Copy scripts and install distro dependencies
# ------------------------------------------------------------
WORKDIR /build
# Copy only scripts first so dependency installation can run early
COPY scripts/ scripts/
RUN find scripts -type f -name '*.sh' -exec chmod +x {} \;
# Install distro-specific build dependencies (and AUR builder on Arch)
RUN scripts/installation/run-dependencies.sh
# ------------------------------------------------------------
# Select distro-specific Docker entrypoint
# ------------------------------------------------------------
# Docker entrypoint (distro-agnostic, uses run-package.sh)
# ------------------------------------------------------------
COPY scripts/docker/entry.sh /usr/local/bin/docker-entry.sh
RUN chmod +x /usr/local/bin/docker-entry.sh
# ------------------------------------------------------------
# Build and install distro-native package-manager package
#
# - Arch: PKGBUILD -> pacman -U
# - Debian: debhelper -> dpkg-buildpackage -> apt install ./package-manager_*.deb
# - Ubuntu: same as Debian
# - Fedora: rpmbuild -> dnf/dnf5/yum install package-manager-*.rpm
# - CentOS: rpmbuild -> dnf/yum install package-manager-*.rpm
#
# Nix is NOT manually installed here; it is handled by init-nix.sh.
# via Makefile `install` target (calls scripts/installation/run-package.sh)
# ------------------------------------------------------------
WORKDIR /build
COPY . .
RUN find scripts -type f -name '*.sh' -exec chmod +x {} \;
RUN set -e; \
. /etc/os-release; \
if [ "$ID" = "arch" ]; then \
echo 'Building Arch package (makepkg --nodeps)...'; \
chown -R builder:builder /build; \
su builder -c "cd /build && rm -f package-manager-*.pkg.tar.* && makepkg --noconfirm --clean --nodeps"; \
\
echo 'Installing generated Arch package...'; \
pacman -U --noconfirm package-manager-*.pkg.tar.*; \
elif [ "$ID" = "debian" ] || [ "$ID" = "ubuntu" ]; then \
echo 'Building Debian/Ubuntu package...'; \
dpkg-buildpackage -us -uc -b; \
\
echo 'Installing generated DEB package...'; \
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y ./../package-manager_*.deb && \
rm -rf /var/lib/apt/lists/*; \
elif [ "$ID" = "fedora" ] || [ "$ID" = "centos" ]; then \
echo 'Setting up rpmbuild dirs...'; \
mkdir -p /root/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}; \
\
echo "Extracting version from package-manager.spec..."; \
version=$(grep -E '^Version:' /build/package-manager.spec | awk '{print $2}'); \
if [ -z "$version" ]; then echo 'ERROR: Version missing!' && exit 1; fi; \
srcdir="package-manager-${version}"; \
\
echo "Preparing source tree for RPM: $srcdir"; \
rm -rf "/tmp/$srcdir"; \
mkdir -p "/tmp/$srcdir"; \
cp -a /build/. "/tmp/$srcdir/"; \
\
echo "Creating source tarball: /root/rpmbuild/SOURCES/$srcdir.tar.gz"; \
tar czf "/root/rpmbuild/SOURCES/$srcdir.tar.gz" -C /tmp "$srcdir"; \
\
echo 'Copying SPEC...'; \
cp /build/package-manager.spec /root/rpmbuild/SPECS/; \
\
echo 'Running rpmbuild...'; \
cd /root/rpmbuild/SPECS && rpmbuild -bb package-manager.spec; \
\
echo 'Installing generated RPM (local, offline)...'; \
rpm_path=$(find /root/rpmbuild/RPMS -name "package-manager-*.rpm" | head -n1); \
if [ -z "$rpm_path" ]; then echo 'ERROR: RPM not found!' && exit 1; fi; \
\
if command -v dnf5 >/dev/null 2>&1; then \
echo 'Using dnf5 to install local RPM (no remote repos)...'; \
if ! dnf5 install -y --disablerepo='*' "$rpm_path"; then \
echo 'dnf5 failed, falling back to rpm -i --nodeps'; \
rpm -i --nodeps "$rpm_path"; \
fi; \
elif command -v dnf >/dev/null 2>&1; then \
echo 'Using dnf to install local RPM (no remote repos)...'; \
if ! dnf install -y --disablerepo='*' "$rpm_path"; then \
echo 'dnf failed, falling back to rpm -i --nodeps'; \
rpm -i --nodeps "$rpm_path"; \
fi; \
elif command -v yum >/dev/null 2>&1; then \
echo 'Using yum to install local RPM (no remote repos)...'; \
if ! yum localinstall -y --disablerepo='*' "$rpm_path"; then \
echo 'yum failed, falling back to rpm -i --nodeps'; \
rpm -i --nodeps "$rpm_path"; \
fi; \
else \
echo 'No dnf/dnf5/yum found, falling back to rpm -i --nodeps...'; \
rpm -i --nodeps "$rpm_path"; \
fi; \
\
rm -rf "/tmp/$srcdir"; \
else \
echo "Unsupported distro: ${ID}"; \
exit 1; \
fi; \
echo "Building and installing package-manager via make install..."; \
make install; \
rm -rf /build
# ------------------------------------------------------------
@@ -191,8 +54,5 @@ RUN set -e; \
# ------------------------------------------------------------
WORKDIR /src
COPY scripts/docker-entry-dev.sh /usr/local/bin/docker-entry-dev.sh
RUN chmod +x /usr/local/bin/docker-entry-dev.sh
ENTRYPOINT ["/usr/local/bin/docker-entry-dev.sh"]
CMD ["--help"]
ENTRYPOINT ["/usr/local/bin/docker-entry.sh"]
CMD ["pkgmgr", "--help"]

Makefile

@@ -1,5 +1,6 @@
.PHONY: install setup uninstall aur_builder_setup \
test build build-no-cache test-unit test-e2e test-integration
.PHONY: install setup uninstall \
test build build-no-cache test-unit test-e2e test-integration \
test-container
# ------------------------------------------------------------
# Local Nix cache directories in the repo
@@ -9,267 +10,72 @@ NIX_CACHE_VOLUME := pkgmgr_nix_cache
# ------------------------------------------------------------
# Distro list and base images
# (kept for documentation/reference; actual build logic is in scripts/build)
# ------------------------------------------------------------
DISTROS := arch debian ubuntu fedora centos
BASE_IMAGE_ARCH := archlinux:latest
BASE_IMAGE_DEBIAN := debian:stable-slim
BASE_IMAGE_UBUNTU := ubuntu:latest
BASE_IMAGE_FEDORA := fedora:latest
BASE_IMAGE_CENTOS := quay.io/centos/centos:stream9
BASE_IMAGE_arch := archlinux:latest
BASE_IMAGE_debian := debian:stable-slim
BASE_IMAGE_ubuntu := ubuntu:latest
BASE_IMAGE_fedora := fedora:latest
BASE_IMAGE_centos := quay.io/centos/centos:stream9
# Helper to echo which image is used for which distro (purely informational)
define echo_build_info
@echo "Building image for distro '$(1)' with base image '$(2)'..."
endef
# Make them available in scripts
export DISTROS
export BASE_IMAGE_ARCH
export BASE_IMAGE_DEBIAN
export BASE_IMAGE_UBUNTU
export BASE_IMAGE_FEDORA
export BASE_IMAGE_CENTOS
# ------------------------------------------------------------
# PKGMGR setup (wrapper)
# PKGMGR setup (developer wrapper -> scripts/installation/main.sh)
# ------------------------------------------------------------
setup: install
@echo "Running pkgmgr setup via main.py..."
@if [ -x "$$HOME/.venvs/pkgmgr/bin/python" ]; then \
echo "Using virtualenv Python at $$HOME/.venvs/pkgmgr/bin/python"; \
"$$HOME/.venvs/pkgmgr/bin/python" main.py install; \
else \
echo "Virtualenv not found, falling back to system python3"; \
python3 main.py install; \
fi
setup:
@bash scripts/installation/main.sh
# ------------------------------------------------------------
# Docker build targets: build all images
# Docker build targets (delegated to scripts/build)
# ------------------------------------------------------------
build-no-cache:
@for distro in $(DISTROS); do \
case "$$distro" in \
arch) base_image="$(BASE_IMAGE_arch)" ;; \
debian) base_image="$(BASE_IMAGE_debian)" ;; \
ubuntu) base_image="$(BASE_IMAGE_ubuntu)" ;; \
fedora) base_image="$(BASE_IMAGE_fedora)" ;; \
centos) base_image="$(BASE_IMAGE_centos)" ;; \
*) echo "Unknown distro '$$distro'" >&2; exit 1 ;; \
esac; \
echo "Building test image 'package-manager-test-$$distro' with no cache (BASE_IMAGE=$$base_image)..."; \
docker build --no-cache \
--build-arg BASE_IMAGE="$$base_image" \
-t "package-manager-test-$$distro" . || exit $$?; \
done
@bash scripts/build/build-image-no-cache.sh
build:
@for distro in $(DISTROS); do \
case "$$distro" in \
arch) base_image="$(BASE_IMAGE_arch)" ;; \
debian) base_image="$(BASE_IMAGE_debian)" ;; \
ubuntu) base_image="$(BASE_IMAGE_ubuntu)" ;; \
fedora) base_image="$(BASE_IMAGE_fedora)" ;; \
centos) base_image="$(BASE_IMAGE_centos)" ;; \
*) echo "Unknown distro '$$distro'" >&2; exit 1 ;; \
esac; \
echo "Building test image 'package-manager-test-$$distro' (BASE_IMAGE=$$base_image)..."; \
docker build \
--build-arg BASE_IMAGE="$$base_image" \
-t "package-manager-test-$$distro" . || exit $$?; \
done
build-arch:
@base_image="$(BASE_IMAGE_arch)"; \
echo "Building test image 'package-manager-test-arch' (BASE_IMAGE=$$base_image)..."; \
docker build \
--build-arg BASE_IMAGE="$$base_image" \
-t "package-manager-test-arch" . || exit $$?;
@bash scripts/build/build-image.sh
# ------------------------------------------------------------
# Test targets
# Test targets (delegated to scripts/test)
# ------------------------------------------------------------
# Unit tests: only in Arch container (fastest feedback), via Nix devShell
test-unit: build-arch
@echo "============================================================"
@echo ">>> Running UNIT tests in Arch container (via Nix devShell)"
@echo "============================================================"
docker run --rm \
-v "$$(pwd):/src" \
--workdir /src \
--entrypoint bash \
"package-manager-test-arch" \
-c '\
set -e; \
if [ -f /etc/os-release ]; then . /etc/os-release; fi; \
echo "Detected container distro: $${ID:-unknown} (like: $${ID_LIKE:-})"; \
echo "Running Python unit tests (tests/unit) via nix develop..."; \
git config --global --add safe.directory /src || true; \
cd /src; \
nix develop .#default --no-write-lock-file -c \
python -m unittest discover \
-s tests/unit \
-t /src \
-p "test_*.py"; \
'
test-unit:
@bash scripts/test/test-unit.sh
# Integration tests: also in Arch container, via Nix devShell
test-integration: build-arch
@echo "============================================================"
@echo ">>> Running INTEGRATION tests in Arch container (via Nix devShell)"
@echo "============================================================"
docker run --rm \
-v "$$(pwd):/src" \
--workdir /src \
--entrypoint bash \
"package-manager-test-arch" \
-c '\
set -e; \
if [ -f /etc/os-release ]; then . /etc/os-release; fi; \
echo "Detected container distro: $${ID:-unknown} (like: $${ID_LIKE:-})"; \
echo "Running Python integration tests (tests/integration) via nix develop..."; \
git config --global --add safe.directory /src || true; \
cd /src; \
nix develop .#default --no-write-lock-file -c \
python -m unittest discover \
-s tests/integration \
-t /src \
-p "test_*.py"; \
'
test-integration:
@bash scripts/test/test-integration.sh
# End-to-end tests: run in all distros via Nix devShell (tests/e2e)
test-e2e: build
@echo "Ensuring Docker Nix volumes exist (auto-created if missing)..."
@echo "Running E2E tests inside Nix devShell with cached store for all distros: $(DISTROS)"
test-e2e:
@bash scripts/test/test-e2e.sh
@for distro in $(DISTROS); do \
echo "============================================================"; \
echo ">>> Running E2E tests in container for distro: $$distro"; \
echo "============================================================"; \
# Only for Arch: mount /nix as volume, for others use image-installed Nix \
if [ "$$distro" = "arch" ]; then \
NIX_STORE_MOUNT="-v $(NIX_STORE_VOLUME):/nix"; \
else \
NIX_STORE_MOUNT=""; \
fi; \
docker run --rm \
-v "$$(pwd):/src" \
$$NIX_STORE_MOUNT \
-v "$(NIX_CACHE_VOLUME):/root/.cache/nix" \
--workdir /src \
--entrypoint bash \
"package-manager-test-$$distro" \
-c '\
set -e; \
if [ -f /etc/os-release ]; then . /etc/os-release; fi; \
echo "Detected container distro: $${ID:-unknown} (like: $${ID_LIKE:-})"; \
echo "Preparing Nix environment..."; \
if [ -f "/nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh" ]; then \
. "/nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh"; \
fi; \
if [ -f "$$HOME/.nix-profile/etc/profile.d/nix.sh" ]; then \
. "$$HOME/.nix-profile/etc/profile.d/nix.sh"; \
fi; \
PATH="/nix/var/nix/profiles/default/bin:$$HOME/.nix-profile/bin:$$PATH"; \
export PATH; \
echo "PATH is now:"; \
echo "$$PATH"; \
NIX_CMD=""; \
if command -v nix >/dev/null 2>&1; then \
echo "Found nix on PATH:"; \
command -v nix; \
NIX_CMD="nix"; \
else \
echo "nix not found on PATH, scanning /nix/store for a nix binary..."; \
for path in /nix/store/*-nix-*/bin/nix; do \
if [ -x "$$path" ]; then \
echo "Found nix binary at $$path"; \
NIX_CMD="$$path"; \
break; \
fi; \
done; \
fi; \
if [ -z "$$NIX_CMD" ]; then \
echo "ERROR: nix binary not found anywhere cannot run devShell"; \
exit 1; \
fi; \
echo "Using Nix command: $$NIX_CMD"; \
echo "Run E2E tests inside Nix devShell (tests/e2e)..."; \
git config --global --add safe.directory /src || true; \
cd /src; \
"$$NIX_CMD" develop .#default --no-write-lock-file -c \
python3 -m unittest discover \
-s /src/tests/e2e \
-p "test_*.py"; \
' || exit $$?; \
done
test-container:
@bash scripts/test/test-container.sh
# ------------------------------------------------------------
# Build only missing container images
# ------------------------------------------------------------
build-missing:
@bash scripts/build/build-image-missing.sh
# Combined test target for local + CI (unit + e2e + integration)
test: build test-unit test-e2e test-integration
test: build-missing test-container test-unit test-e2e test-integration
# ------------------------------------------------------------
# Installer for host systems (original logic)
# System install (native packages, calls scripts/installation/run-package.sh)
# ------------------------------------------------------------
install:
@if [ -n "$$IN_NIX_SHELL" ]; then \
echo "Nix shell detected (IN_NIX_SHELL=1). Skipping venv/pip install handled by Nix flake."; \
else \
echo "Making 'main.py' executable..."; \
chmod +x main.py; \
echo "Checking if global user virtual environment exists..."; \
mkdir -p "$$HOME/.venvs"; \
if [ ! -d "$$HOME/.venvs/pkgmgr" ]; then \
echo "Creating global venv at $$HOME/.venvs/pkgmgr..."; \
python3 -m venv "$$HOME/.venvs/pkgmgr"; \
fi; \
echo "Installing required Python packages into $$HOME/.venvs/pkgmgr..."; \
"$$HOME/.venvs/pkgmgr/bin/python" -m ensurepip --upgrade; \
"$$HOME/.venvs/pkgmgr/bin/pip" install --upgrade pip setuptools wheel; \
echo "Looking for requirements.txt / _requirements.txt..."; \
if [ -f requirements.txt ]; then \
echo "Installing Python packages from requirements.txt..."; \
"$$HOME/.venvs/pkgmgr/bin/pip" install -r requirements.txt; \
elif [ -f _requirements.txt ]; then \
echo "Installing Python packages from _requirements.txt..."; \
"$$HOME/.venvs/pkgmgr/bin/pip" install -r _requirements.txt; \
else \
echo "No requirements.txt or _requirements.txt found, skipping dependency installation."; \
fi; \
echo "Ensuring $$HOME/.bashrc and $$HOME/.zshrc exist..."; \
touch "$$HOME/.bashrc" "$$HOME/.zshrc"; \
echo "Ensuring automatic activation of $$HOME/.venvs/pkgmgr for this user..."; \
for rc in "$$HOME/.bashrc" "$$HOME/.zshrc"; do \
rc_line='if [ -d "$${HOME}/.venvs/pkgmgr" ]; then . "$${HOME}/.venvs/pkgmgr/bin/activate"; if [ -n "$${PS1:-}" ]; then echo "Global Python virtual environment '\''~/.venvs/pkgmgr'\'' activated."; fi; fi'; \
grep -qxF "$${rc_line}" "$$rc" || echo "$${rc_line}" >> "$$rc"; \
done; \
echo "Arch/Manjaro detection and optional AUR setup..."; \
if command -v pacman >/dev/null 2>&1; then \
$(MAKE) aur_builder_setup; \
else \
echo "Not Arch-based (no pacman). Skipping aur_builder/yay setup."; \
fi; \
echo "Installation complete. Please restart your shell (or 'exec bash' or 'exec zsh') for the changes to take effect."; \
fi
# ------------------------------------------------------------
# AUR builder setup — only on Arch/Manjaro
# ------------------------------------------------------------
aur_builder_setup:
@echo "Setting up aur_builder and yay (Arch/Manjaro)..."
@sudo pacman -Syu --noconfirm
@sudo pacman -S --needed --noconfirm base-devel git sudo
@if ! getent group aur_builder >/dev/null; then sudo groupadd -r aur_builder; fi
@if ! id -u aur_builder >/dev/null 2>&1; then sudo useradd -m -r -g aur_builder -s /bin/bash aur_builder; fi
@echo '%aur_builder ALL=(ALL) NOPASSWD: /usr/bin/pacman' | sudo tee /etc/sudoers.d/aur_builder >/dev/null
@sudo chmod 0440 /etc/sudoers.d/aur_builder
@if ! sudo -u aur_builder bash -lc 'command -v yay >/dev/null'; then \
sudo -u aur_builder bash -lc 'cd ~ && rm -rf yay && git clone https://aur.archlinux.org/yay.git && cd yay && makepkg -si --noconfirm'; \
else \
echo "yay already installed."; \
fi
@echo "aur_builder/yay setup complete."
@echo "Building and installing distro-native package-manager for this system..."
@bash scripts/installation/run-package.sh
# ------------------------------------------------------------
# Uninstall target
# ------------------------------------------------------------
uninstall:
@echo "Removing global user virtual environment if it exists..."
@rm -rf "$$HOME/.venvs/pkgmgr"
@echo "Cleaning up $$HOME/.bashrc and $$HOME/.zshrc entries..."
@for rc in "$$HOME/.bashrc" "$$HOME/.zshrc"; do \
sed -i '/\.venvs\/pkgmgr\/bin\/activate"; if \[ -n "\$${PS1:-}" \]; then echo "Global Python virtual environment '\''~\/\.venvs\/pkgmgr'\'' activated."; fi; fi/d' "$$rc"; \
done
@echo "Uninstallation complete. Please restart your shell (or 'exec bash' or 'exec zsh') for the changes to fully apply."
@bash scripts/uninstall.sh

PKGBUILD

@@ -1,7 +1,7 @@
# Maintainer: Kevin Veen-Birkenbach <info@veen.world>
pkgname=package-manager
pkgver=0.4.0
pkgver=0.7.3
pkgrel=1
pkgdesc="Local-flake wrapper for Kevin's package-manager (Nix-based)."
arch=('any')

config/wip.yml (new file)

@@ -0,0 +1,7 @@
- account: kevinveenbirkenbach
  alias: gkfdrtdtcntr
  provider: github.com
  repository: federated-to-central-social-network-bridge
  verified:
    gpg_keys:
      - 44D8F11FD62F878E

debian/changelog

@@ -1,9 +1,83 @@
package-manager (0.7.3-1) unstable; urgency=medium
* Fixed bug: Ignored packages are now ignored
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 16:08:31 +0100
package-manager (0.7.2-1) unstable; urgency=medium
* Implemented Changelog Support for Fedora and Debian
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 15:48:58 +0100
package-manager (0.7.1-1) unstable; urgency=medium
* Fix floating 'latest' tag logic: dereference annotated target (vX.Y.Z^{}), add tag message to avoid Git errors, ensure best-effort update without blocking releases, and update unit tests (see ChatGPT conversation: https://chatgpt.com/share/69383024-efa4-800f-a875-129b81fa40ff).
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 15:26:54 +0100
package-manager (0.7.0-1) unstable; urgency=medium
* Add Git helpers for branch sync and floating 'latest' tag in the release workflow, ensure main/master are updated from origin before tagging, and extend unit/e2e tests including 'pkgmgr release --help' coverage (see ChatGPT conversation: https://chatgpt.com/share/69383024-efa4-800f-a875-129b81fa40ff)
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 15:21:03 +0100
package-manager (0.6.0-1) unstable; urgency=medium
* Expose DISTROS and BASE_IMAGE_* variables as exported Makefile environment variables so all build and test commands can consume them dynamically. By exporting these values, every Make target (e.g., build, build-no-cache, build-missing, test-container, test-unit, test-e2e) and every delegated script in scripts/build/ and scripts/test/ now receives a consistent view of the supported distributions and their base container images. This change removes duplicated definitions across scripts, ensures reproducible builds, and allows build tooling to react automatically when new distros or base images are added to the Makefile.
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 05:59:58 +0100
package-manager (0.5.1-1) unstable; urgency=medium
* Refine pkgmgr release CLI close wiring and integration tests for --close flag (ChatGPT: https://chatgpt.com/share/69376b4e-8440-800f-9d06-535ec1d7a40e)
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 01:21:31 +0100
package-manager (0.5.0-1) unstable; urgency=medium
* Add pkgmgr branch close subcommand, extend CLI parser wiring, and add unit tests for branch handling and version version-selection logic (see ChatGPT conversation: https://chatgpt.com/share/693762a3-9ea8-800f-a640-bc78170953d1)
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 00:44:16 +0100
package-manager (0.4.3-1) unstable; urgency=medium
* Implement current-directory repository selection for release and proxy commands, unify selection semantics across CLI layers, extend release workflow with --close, integrate branch closing logic, fix wiring for get_repo_identifier/get_repo_dir, update packaging files (PKGBUILD, spec, flake.nix, pyproject), and add comprehensive unit/e2e tests for release and branch commands (see ChatGPT conversation: https://chatgpt.com/share/69375cfe-9e00-800f-bd65-1bd5937e1696)
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 00:29:08 +0100
package-manager (0.4.2-1) unstable; urgency=medium
* Wire pkgmgr release CLI to new helper and add unit tests (see ChatGPT conversation: https://chatgpt.com/share/69374f09-c760-800f-92e4-5b44a4510b62)
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 00:03:46 +0100
package-manager (0.4.1-1) unstable; urgency=medium
* Add branch close subcommand and integrate release close/editor flow (ChatGPT: https://chatgpt.com/share/69374f09-c760-800f-92e4-5b44a4510b62)
-- Kevin Veen-Birkenbach <kevin@veen.world> Mon, 08 Dec 2025 23:20:28 +0100
package-manager (0.4.0-1) unstable; urgency=medium
* Add branch closing helper and --close flag to release command, including CLI wiring and tests (see https://chatgpt.com/share/69374aec-74ec-800f-bde3-5d91dfdb9b91)
-- Kevin Veen-Birkenbach <kevin@veen.world> Mon, 08 Dec 2025 23:02:43 +0100
package-manager (0.3.0-1) unstable; urgency=medium
* Massive refactor and feature expansion:
- Complete rewrite of config loading system (layered defaults + user config)
- New selection engine (--string, --category, --tag)
- Overhauled list output (colored statuses, alias highlight)
- New config update logic + default YAML sync
- Improved proxy command handling
- Full CLI routing refactor
- Expanded E2E tests for list, proxy, and selection logic
Conversation: https://chatgpt.com/share/693745c3-b8d8-800f-aa29-c8481a2ffae1
-- Kevin Veen-Birkenbach <kevin@veen.world> Mon, 08 Dec 2025 22:40:49 +0100
package-manager (0.2.0-1) unstable; urgency=medium
* Add preview-first release workflow and extended packaging support (see ChatGPT conversation: https://chatgpt.com/share/693722b4-af9c-800f-bccc-8a4036e99630)

flake.nix

@@ -31,7 +31,7 @@
rec {
pkgmgr = pyPkgs.buildPythonApplication {
pname = "package-manager";
version = "0.4.0";
version = "0.7.3";
# Use the git repo as source
src = ./.;

package-manager.spec

@@ -1,5 +1,5 @@
Name: package-manager
Version: 0.4.0
Version: 0.7.3
Release: 1%{?dist}
Summary: Wrapper that runs Kevin's package-manager via Nix flake
@@ -35,35 +35,36 @@ available on the system.
%install
rm -rf %{buildroot}
install -d %{buildroot}%{_bindir}
install -d %{buildroot}%{_libdir}/package-manager
# Install project tree into a fixed, architecture-independent location.
install -d %{buildroot}/usr/lib/package-manager
# Copy full project source into /usr/lib/package-manager
cp -a . %{buildroot}%{_libdir}/package-manager/
cp -a . %{buildroot}/usr/lib/package-manager/
# Wrapper
install -m0755 scripts/pkgmgr-wrapper.sh %{buildroot}%{_bindir}/pkgmgr
# Shared Nix init script (ensure it is executable in the installed tree)
install -m0755 scripts/init-nix.sh %{buildroot}%{_libdir}/package-manager/init-nix.sh
install -m0755 scripts/init-nix.sh %{buildroot}/usr/lib/package-manager/init-nix.sh
# Remove packaging-only and development artefacts from the installed tree
rm -rf \
%{buildroot}%{_libdir}/package-manager/PKGBUILD \
%{buildroot}%{_libdir}/package-manager/Dockerfile \
%{buildroot}%{_libdir}/package-manager/debian \
%{buildroot}%{_libdir}/package-manager/.git \
%{buildroot}%{_libdir}/package-manager/.github \
%{buildroot}%{_libdir}/package-manager/tests \
%{buildroot}%{_libdir}/package-manager/.gitignore \
%{buildroot}%{_libdir}/package-manager/__pycache__ \
%{buildroot}%{_libdir}/package-manager/.gitkeep || true
%{buildroot}/usr/lib/package-manager/PKGBUILD \
%{buildroot}/usr/lib/package-manager/Dockerfile \
%{buildroot}/usr/lib/package-manager/debian \
%{buildroot}/usr/lib/package-manager/.git \
%{buildroot}/usr/lib/package-manager/.github \
%{buildroot}/usr/lib/package-manager/tests \
%{buildroot}/usr/lib/package-manager/.gitignore \
%{buildroot}/usr/lib/package-manager/__pycache__ \
%{buildroot}/usr/lib/package-manager/.gitkeep || true
%post
# Initialize Nix (if needed) after installing the package-manager files.
if [ -x %{_libdir}/package-manager/init-nix.sh ]; then
%{_libdir}/package-manager/init-nix.sh || true
if [ -x /usr/lib/package-manager/init-nix.sh ]; then
/usr/lib/package-manager/init-nix.sh || true
else
echo ">>> Warning: %{_libdir}/package-manager/init-nix.sh not found or not executable."
echo ">>> Warning: /usr/lib/package-manager/init-nix.sh not found or not executable."
fi
%postun
@@ -73,8 +74,14 @@ echo ">>> package-manager removed. Nix itself was not removed."
%doc README.md
%license LICENSE
%{_bindir}/pkgmgr
%{_libdir}/package-manager/
/usr/lib/package-manager/
%changelog
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.3-1
- Fixed bug: Ignored packages are now ignored
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.2-1
- Implemented Changelog Support for Fedora and Debian
* Sat Dec 06 2025 Kevin Veen-Birkenbach <info@veen.world> - 0.1.1-1
- Initial RPM packaging for package-manager


@@ -1,3 +1,4 @@
# pkgmgr/branch_commands.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
@@ -5,14 +6,14 @@
High-level helpers for branch-related operations.
This module encapsulates the actual Git logic so the CLI layer
(pkgmgr.cli_core.commands.branch) stays thin and testable.
(pkgmgr.cli.commands.branch) stays thin and testable.
"""
from __future__ import annotations
from typing import Optional
from pkgmgr.git_utils import run_git, GitError, get_current_branch
from pkgmgr.core.git import run_git, GitError, get_current_branch
def open_branch(


@@ -13,7 +13,7 @@ from __future__ import annotations
from typing import Optional
from pkgmgr.git_utils import run_git, GitError
from pkgmgr.core.git import run_git, GitError
def generate_changelog(


@@ -0,0 +1,181 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Initialize user configuration by scanning the repositories base directory.

This module scans the path:

    defaults_config["directories"]["repositories"]

with the expected structure:

    {base}/{provider}/{account}/{repository}

For each discovered repository, the function:

  • derives provider, account, repository from the folder structure
  • (optionally) determines the latest commit hash via git log
  • generates a unique CLI alias
  • marks ignore=True for newly discovered repos
  • skips repos already known in defaults or user config
"""

from __future__ import annotations

import os
import subprocess
from typing import Any, Dict

from pkgmgr.core.command.alias import generate_alias
from pkgmgr.core.config.save import save_user_config


def config_init(
    user_config: Dict[str, Any],
    defaults_config: Dict[str, Any],
    bin_dir: str,
    user_config_path: str,
) -> None:
    """
    Scan the repositories base directory and add missing entries
    to the user configuration.
    """
    # ------------------------------------------------------------
    # Announce where we will write the result
    # ------------------------------------------------------------
    print("============================================================")
    print("[INIT] Writing user configuration to:")
    print(f"       {user_config_path}")
    print("============================================================")

    repositories_base_dir = os.path.expanduser(
        defaults_config["directories"]["repositories"]
    )

    print("[INIT] Scanning repository base directory:")
    print(f"       {repositories_base_dir}")
    print("")

    if not os.path.isdir(repositories_base_dir):
        print(f"[ERROR] Base directory does not exist: {repositories_base_dir}")
        return

    default_keys = {
        (entry.get("provider"), entry.get("account"), entry.get("repository"))
        for entry in defaults_config.get("repositories", [])
    }
    existing_keys = {
        (entry.get("provider"), entry.get("account"), entry.get("repository"))
        for entry in user_config.get("repositories", [])
    }
    existing_aliases = {
        entry.get("alias")
        for entry in user_config.get("repositories", [])
        if entry.get("alias")
    }

    new_entries = []
    scanned = 0
    skipped = 0

    # ------------------------------------------------------------
    # Actual scanning
    # ------------------------------------------------------------
    for provider in os.listdir(repositories_base_dir):
        provider_path = os.path.join(repositories_base_dir, provider)
        if not os.path.isdir(provider_path):
            continue
        print(f"[SCAN] Provider: {provider}")

        for account in os.listdir(provider_path):
            account_path = os.path.join(provider_path, account)
            if not os.path.isdir(account_path):
                continue
            print(f"[SCAN]   Account: {account}")

            for repo_name in os.listdir(account_path):
                repo_path = os.path.join(account_path, repo_name)
                if not os.path.isdir(repo_path):
                    continue

                scanned += 1
                key = (provider, account, repo_name)

                # Already known?
                if key in default_keys:
                    skipped += 1
                    print(f"[SKIP] (defaults)    {provider}/{account}/{repo_name}")
                    continue
                if key in existing_keys:
                    skipped += 1
                    print(f"[SKIP] (user-config) {provider}/{account}/{repo_name}")
                    continue

                print(f"[ADD]  {provider}/{account}/{repo_name}")

                # Determine commit hash
                try:
                    result = subprocess.run(
                        ["git", "log", "-1", "--format=%H"],
                        cwd=repo_path,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        text=True,
                        check=True,
                    )
                    verified = result.stdout.strip()
                    print(f"[INFO]   Latest commit: {verified}")
                except Exception as exc:
                    verified = ""
                    print(f"[WARN]   Could not read commit: {exc}")

                entry = {
                    "provider": provider,
                    "account": account,
                    "repository": repo_name,
                    "verified": {"commit": verified},
                    "ignore": True,
                }

                # Alias generation
                alias = generate_alias(
                    {
                        "repository": repo_name,
                        "provider": provider,
                        "account": account,
                    },
                    bin_dir,
                    existing_aliases,
                )
                entry["alias"] = alias
                existing_aliases.add(alias)
                print(f"[INFO]   Alias generated: {alias}")

                new_entries.append(entry)

            print("")  # blank line between accounts

    # ------------------------------------------------------------
    # Summary
    # ------------------------------------------------------------
    print("============================================================")
    print(f"[DONE] Scanned repositories:   {scanned}")
    print(f"[DONE] Skipped (known):        {skipped}")
    print(f"[DONE] New entries discovered: {len(new_entries)}")
    print("============================================================")

    # ------------------------------------------------------------
    # Save if needed
    # ------------------------------------------------------------
    if new_entries:
        user_config.setdefault("repositories", []).extend(new_entries)
        save_user_config(user_config, user_config_path)
        print("[SAVE] Wrote user configuration to:")
        print(f"       {user_config_path}")
    else:
        print("[INFO] No new repositories were added.")
        print("============================================================")


@@ -1,5 +1,5 @@
import yaml
from .load_config import load_config
from pkgmgr.core.config.load import load_config
def show_config(selected_repos, user_config_path, full_config=False):
"""Display configuration for one or more repositories, or the entire merged config."""


@@ -1,7 +1,7 @@
import os
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.run_command import run_command
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.command.run import run_command
import sys
def exec_proxy_command(proxy_prefix: str, selected_repos, repositories_base_dir, all_repos, proxy_command: str, extra_args, preview: bool):

pkgmgr/actions/release/__init__.py

@@ -0,0 +1,310 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Release helper for pkgmgr (public entry point).

This package provides the high-level `release()` function used by the
pkgmgr CLI to perform versioned releases:

  - Determine the next semantic version based on existing Git tags.
  - Update pyproject.toml with the new version.
  - Update additional packaging files (flake.nix, PKGBUILD,
    debian/changelog, RPM spec) where present.
  - Prepend a basic entry to CHANGELOG.md.
  - Move the floating 'latest' tag to the newly created release tag so
    the newest release is always marked as latest.

Additional behaviour:

  - If `preview=True` (from --preview), no files are written and no
    Git commands are executed. Instead, a detailed summary of the
    planned changes and commands is printed.
  - If `preview=False` and not forced, the release is executed in two
    phases:
      1) Preview-only run (dry-run).
      2) Interactive confirmation, then real release if confirmed.
    This confirmation can be skipped with the `force=True` flag.
  - Before creating and pushing tags, main/master is updated from origin
    when the release is performed on one of these branches.
  - If `close=True` is used and the current branch is not main/master,
    the branch will be closed via branch_commands.close_branch() after
    a successful release.
"""

from __future__ import annotations

import os
import sys
from typing import Optional

from pkgmgr.core.git import get_current_branch, GitError
from pkgmgr.actions.branch import close_branch

from .versioning import determine_current_version, bump_semver
from .git_ops import run_git_command, sync_branch_with_remote, update_latest_tag
from .files import (
    update_pyproject_version,
    update_flake_version,
    update_pkgbuild_version,
    update_spec_version,
    update_changelog,
    update_debian_changelog,
    update_spec_changelog,
)


# ---------------------------------------------------------------------------
# Internal implementation (single-phase, preview or real)
# ---------------------------------------------------------------------------

def _release_impl(
    pyproject_path: str = "pyproject.toml",
    changelog_path: str = "CHANGELOG.md",
    release_type: str = "patch",
    message: Optional[str] = None,
    preview: bool = False,
    close: bool = False,
) -> None:
    """
    Internal implementation that performs a single-phase release.
    """
    current_ver = determine_current_version()
    new_ver = bump_semver(current_ver, release_type)
    new_ver_str = str(new_ver)
    new_tag = new_ver.to_tag(with_prefix=True)

    mode = "PREVIEW" if preview else "REAL"
    print(f"Release mode:    {mode}")
    print(f"Current version: {current_ver}")
    print(f"New version:     {new_ver_str} ({release_type})")

    repo_root = os.path.dirname(os.path.abspath(pyproject_path))

    # Update core project metadata and packaging files
    update_pyproject_version(pyproject_path, new_ver_str, preview=preview)
    changelog_message = update_changelog(
        changelog_path,
        new_ver_str,
        message=message,
        preview=preview,
    )

    flake_path = os.path.join(repo_root, "flake.nix")
    update_flake_version(flake_path, new_ver_str, preview=preview)

    pkgbuild_path = os.path.join(repo_root, "PKGBUILD")
    update_pkgbuild_version(pkgbuild_path, new_ver_str, preview=preview)

    spec_path = os.path.join(repo_root, "package-manager.spec")
    update_spec_version(spec_path, new_ver_str, preview=preview)

    # Determine a single effective_message to be reused across all
    # changelog targets (project, Debian, Fedora).
    effective_message: Optional[str] = message
    if effective_message is None and isinstance(changelog_message, str):
        if changelog_message.strip():
            effective_message = changelog_message.strip()

    debian_changelog_path = os.path.join(repo_root, "debian", "changelog")
    package_name = os.path.basename(repo_root) or "package-manager"

    # Debian changelog
    update_debian_changelog(
        debian_changelog_path,
        package_name=package_name,
        new_version=new_ver_str,
        message=effective_message,
        preview=preview,
    )

    # Fedora / RPM %changelog
    update_spec_changelog(
        spec_path=spec_path,
        package_name=package_name,
        new_version=new_ver_str,
        message=effective_message,
        preview=preview,
    )

    commit_msg = f"Release version {new_ver_str}"
    tag_msg = effective_message or commit_msg

    # Determine branch and ensure it is up to date if main/master
    try:
        branch = get_current_branch() or "main"
    except GitError:
        branch = "main"

    print(f"Releasing on branch: {branch}")

    # Ensure main/master are up-to-date from origin before creating and
    # pushing tags. For other branches we only log the intent.
    sync_branch_with_remote(branch, preview=preview)

    files_to_add = [
        pyproject_path,
        changelog_path,
        flake_path,
        pkgbuild_path,
        spec_path,
        debian_changelog_path,
    ]
    existing_files = [p for p in files_to_add if p and os.path.exists(p)]

    if preview:
        for path in existing_files:
            print(f"[PREVIEW] Would run: git add {path}")
        print(f'[PREVIEW] Would run: git commit -am "{commit_msg}"')
        print(f'[PREVIEW] Would run: git tag -a {new_tag} -m "{tag_msg}"')
        print(f"[PREVIEW] Would run: git push origin {branch}")
        print("[PREVIEW] Would run: git push origin --tags")
        # Also update the floating 'latest' tag to the new highest SemVer.
        update_latest_tag(new_tag, preview=True)
        if close and branch not in ("main", "master"):
            print(
                f"[PREVIEW] Would also close branch {branch} after the release "
                "(close=True and branch is not main/master)."
            )
        elif close:
            print(
                f"[PREVIEW] close=True but current branch is {branch}; "
                "no branch would be closed."
            )
        print("Preview completed. No changes were made.")
        return

    for path in existing_files:
        run_git_command(f"git add {path}")
    run_git_command(f'git commit -am "{commit_msg}"')
    run_git_command(f'git tag -a {new_tag} -m "{tag_msg}"')
    run_git_command(f"git push origin {branch}")
    run_git_command("git push origin --tags")

    # Move 'latest' to the new release tag so the newest SemVer is always
    # marked as latest. This is best-effort and must not break the release.
    try:
        update_latest_tag(new_tag, preview=False)
    except GitError as exc:  # pragma: no cover
        print(
            f"[WARN] Failed to update floating 'latest' tag for {new_tag}: {exc}\n"
            "[WARN] The release itself completed successfully; only the "
            "'latest' tag was not updated."
        )

    print(f"Release {new_ver_str} completed.")

    if close:
        if branch in ("main", "master"):
            print(
                f"[INFO] close=True but current branch is {branch}; "
                "nothing to close."
            )
            return
        print(
            f"[INFO] Closing branch {branch} after successful release "
            "(close=True and branch is not main/master)..."
        )
        try:
            close_branch(name=branch, base_branch="main", cwd=".")
        except Exception as exc:  # pragma: no cover
            print(f"[WARN] Failed to close branch {branch} automatically: {exc}")


# ---------------------------------------------------------------------------
# Public release entry point
# ---------------------------------------------------------------------------

def release(
    pyproject_path: str = "pyproject.toml",
    changelog_path: str = "CHANGELOG.md",
    release_type: str = "patch",
    message: Optional[str] = None,
    preview: bool = False,
    force: bool = False,
    close: bool = False,
) -> None:
    """
    High-level release entry point.

    Modes:
      - preview=True:
          * Single-phase PREVIEW only.
      - preview=False, force=True:
          * Single-phase REAL release, no interactive preview.
      - preview=False, force=False:
          * Two-phase flow (intended default for interactive CLI use).
    """
    if preview:
        _release_impl(
            pyproject_path=pyproject_path,
            changelog_path=changelog_path,
            release_type=release_type,
            message=message,
            preview=True,
            close=close,
        )
        return

    if force:
        _release_impl(
            pyproject_path=pyproject_path,
            changelog_path=changelog_path,
            release_type=release_type,
            message=message,
            preview=False,
            close=close,
        )
        return

    if not sys.stdin.isatty():
        _release_impl(
            pyproject_path=pyproject_path,
            changelog_path=changelog_path,
            release_type=release_type,
            message=message,
            preview=False,
            close=close,
        )
        return

    print("[INFO] Running preview before actual release...\n")
    _release_impl(
        pyproject_path=pyproject_path,
        changelog_path=changelog_path,
        release_type=release_type,
        message=message,
        preview=True,
        close=close,
    )

    try:
        answer = input("Proceed with the actual release? [y/N]: ").strip().lower()
    except (EOFError, KeyboardInterrupt):
        print("\n[INFO] Release aborted (no confirmation).")
        return

    if answer not in ("y", "yes"):
        print("Release aborted by user. No changes were made.")
        return

    print("\n[INFO] Running REAL release...\n")
    _release_impl(
        pyproject_path=pyproject_path,
        changelog_path=changelog_path,
        release_type=release_type,
        message=message,
        preview=False,
        close=close,
    )


__all__ = ["release"]
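
Example calls against this entry point, using only arguments from the
signature above:

```python
# Dry-run a minor release: prints planned file edits and git commands only.
release(release_type="minor", preview=True)

# Non-interactive patch release that also closes the current feature branch.
release(
    release_type="patch",
    message="Integrate ignore filtering into selection pipeline",
    force=True,
    close=True,
)
```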

pkgmgr/actions/release/files.py

@@ -0,0 +1,526 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
File and metadata update helpers for the release workflow.
Responsibilities:
- Update pyproject.toml with the new version.
- Update flake.nix, PKGBUILD, RPM spec files where present.
- Prepend release entries to CHANGELOG.md.
- Maintain distribution-specific changelog files:
* debian/changelog
* RPM spec %changelog section
including maintainer metadata where applicable.
"""
from __future__ import annotations
import os
import re
import subprocess
import sys
import tempfile
from datetime import date, datetime
from typing import Optional, Tuple
# ---------------------------------------------------------------------------
# Editor helper for interactive changelog messages
# ---------------------------------------------------------------------------
def _open_editor_for_changelog(initial_message: Optional[str] = None) -> str:
"""
Open $EDITOR (fallback 'nano') so the user can enter a changelog message.
The temporary file is pre-filled with commented instructions and an
optional initial_message. Lines starting with '#' are ignored when the
message is read back.
Returns the final message (may be empty string if user leaves it blank).
"""
editor = os.environ.get("EDITOR", "nano")
with tempfile.NamedTemporaryFile(
mode="w+",
delete=False,
encoding="utf-8",
) as tmp:
tmp_path = tmp.name
tmp.write(
"# Write the changelog entry for this release.\n"
"# Lines starting with '#' will be ignored.\n"
"# Empty result will fall back to a generic message.\n\n"
)
if initial_message:
tmp.write(initial_message.strip() + "\n")
tmp.flush()
try:
subprocess.call([editor, tmp_path])
except FileNotFoundError:
print(
f"[WARN] Editor {editor!r} not found; proceeding without "
"interactive changelog message."
)
try:
with open(tmp_path, "r", encoding="utf-8") as f:
content = f.read()
finally:
try:
os.remove(tmp_path)
except OSError:
pass
lines = [
line for line in content.splitlines()
if not line.strip().startswith("#")
]
return "\n".join(lines).strip()
# ---------------------------------------------------------------------------
# File update helpers (pyproject + extra packaging + changelog)
# ---------------------------------------------------------------------------
def update_pyproject_version(
pyproject_path: str,
new_version: str,
preview: bool = False,
) -> None:
"""
Update the version in pyproject.toml with the new version.
The function looks for a line matching:
version = "X.Y.Z"
and replaces the version part with the given new_version string.
"""
try:
with open(pyproject_path, "r", encoding="utf-8") as f:
content = f.read()
except FileNotFoundError:
print(f"[ERROR] pyproject.toml not found at: {pyproject_path}")
sys.exit(1)
pattern = r'^(version\s*=\s*")([^"]+)(")'
new_content, count = re.subn(
pattern,
lambda m: f'{m.group(1)}{new_version}{m.group(3)}',
content,
flags=re.MULTILINE,
)
if count == 0:
print("[ERROR] Could not find version line in pyproject.toml")
sys.exit(1)
if preview:
print(f"[PREVIEW] Would update pyproject.toml version to {new_version}")
return
with open(pyproject_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(f"Updated pyproject.toml version to {new_version}")
def update_flake_version(
flake_path: str,
new_version: str,
preview: bool = False,
) -> None:
"""
Update the version in flake.nix, if present.
"""
if not os.path.exists(flake_path):
print("[INFO] flake.nix not found, skipping.")
return
try:
with open(flake_path, "r", encoding="utf-8") as f:
content = f.read()
except Exception as exc:
print(f"[WARN] Could not read flake.nix: {exc}")
return
pattern = r'(version\s*=\s*")([^"]+)(")'
new_content, count = re.subn(
pattern,
lambda m: f'{m.group(1)}{new_version}{m.group(3)}',
content,
)
if count == 0:
print("[WARN] No version assignment found in flake.nix, skipping.")
return
if preview:
print(f"[PREVIEW] Would update flake.nix version to {new_version}")
return
with open(flake_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(f"Updated flake.nix version to {new_version}")
def update_pkgbuild_version(
pkgbuild_path: str,
new_version: str,
preview: bool = False,
) -> None:
"""
Update the version in PKGBUILD, if present.
Expects:
pkgver=1.2.3
pkgrel=1
"""
if not os.path.exists(pkgbuild_path):
print("[INFO] PKGBUILD not found, skipping.")
return
try:
with open(pkgbuild_path, "r", encoding="utf-8") as f:
content = f.read()
except Exception as exc:
print(f"[WARN] Could not read PKGBUILD: {exc}")
return
ver_pattern = r"^(pkgver\s*=\s*)(.+)$"
new_content, ver_count = re.subn(
ver_pattern,
lambda m: f"{m.group(1)}{new_version}",
content,
flags=re.MULTILINE,
)
if ver_count == 0:
print("[WARN] No pkgver line found in PKGBUILD.")
new_content = content
rel_pattern = r"^(pkgrel\s*=\s*)(.+)$"
new_content, rel_count = re.subn(
rel_pattern,
lambda m: f"{m.group(1)}1",
new_content,
flags=re.MULTILINE,
)
if rel_count == 0:
print("[WARN] No pkgrel line found in PKGBUILD.")
if preview:
print(f"[PREVIEW] Would update PKGBUILD to pkgver={new_version}, pkgrel=1")
return
with open(pkgbuild_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(f"Updated PKGBUILD to pkgver={new_version}, pkgrel=1")
def update_spec_version(
spec_path: str,
new_version: str,
preview: bool = False,
) -> None:
"""
Update the version in an RPM spec file, if present.
"""
if not os.path.exists(spec_path):
print("[INFO] RPM spec file not found, skipping.")
return
try:
with open(spec_path, "r", encoding="utf-8") as f:
content = f.read()
except Exception as exc:
print(f"[WARN] Could not read spec file: {exc}")
return
ver_pattern = r"^(Version:\s*)(.+)$"
new_content, ver_count = re.subn(
ver_pattern,
lambda m: f"{m.group(1)}{new_version}",
content,
flags=re.MULTILINE,
)
if ver_count == 0:
print("[WARN] No 'Version:' line found in spec file.")
rel_pattern = r"^(Release:\s*)(.+)$"
def _release_repl(m: re.Match[str]) -> str:
rest = m.group(2).strip()
match = re.match(r"^(\d+)(.*)$", rest)
# Keep any suffix such as '%{?dist}'; if the value does not start with
# digits, preserve it verbatim behind the reset release number.
suffix = match.group(2) if match else rest
return f"{m.group(1)}1{suffix}"
new_content, rel_count = re.subn(
rel_pattern,
_release_repl,
new_content,
flags=re.MULTILINE,
)
if rel_count == 0:
print("[WARN] No 'Release:' line found in spec file.")
if preview:
print(
f"[PREVIEW] Would update spec file "
f"{os.path.basename(spec_path)} to Version: {new_version}, Release: 1..."
)
return
with open(spec_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(
f"Updated spec file {os.path.basename(spec_path)} "
f"to Version: {new_version}, Release: 1..."
)
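A minimal self-contained check of the Release-line handling (same pattern and replacement as above; the sample line is invented), showing that a %{?dist} macro survives the reset:

import re

def _release_repl(m):
    rest = m.group(2).strip()
    match = re.match(r"^(\d+)(.*)$", rest)
    suffix = match.group(2) if match else rest
    return f"{m.group(1)}1{suffix}"

line = "Release:        4%{?dist}\n"
new_line, n = re.subn(r"^(Release:\s*)(.+)$", _release_repl, line, flags=re.MULTILINE)
assert n == 1
assert new_line == "Release:        1%{?dist}\n"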
def update_changelog(
changelog_path: str,
new_version: str,
message: Optional[str] = None,
preview: bool = False,
) -> str:
"""
Prepend a new release section to CHANGELOG.md with the new version,
current date, and a message.
"""
today = date.today().isoformat()
if message is None:
if preview:
message = "Automated release."
else:
print(
"\n[INFO] No release message provided, opening editor for "
"changelog entry...\n"
)
editor_message = _open_editor_for_changelog()
if not editor_message:
message = "Automated release."
else:
message = editor_message
header = f"## [{new_version}] - {today}\n"
header += f"\n* {message}\n\n"
if os.path.exists(changelog_path):
try:
with open(changelog_path, "r", encoding="utf-8") as f:
changelog = f.read()
except Exception as exc:
print(f"[WARN] Could not read existing CHANGELOG.md: {exc}")
changelog = ""
else:
changelog = ""
new_changelog = (header + "\n" + changelog) if changelog else header
print("\n================ CHANGELOG ENTRY ================")
print(header.rstrip())
print("=================================================\n")
if preview:
print(f"[PREVIEW] Would prepend new entry for {new_version} to CHANGELOG.md")
return message
with open(changelog_path, "w", encoding="utf-8") as f:
f.write(new_changelog)
print(f"Updated CHANGELOG.md with version {new_version}")
return message
# ---------------------------------------------------------------------------
# Debian changelog helpers (with Git config fallback for maintainer)
# ---------------------------------------------------------------------------
def _get_git_config_value(key: str) -> Optional[str]:
"""
Try to read a value from `git config --get <key>`.
"""
try:
result = subprocess.run(
["git", "config", "--get", key],
capture_output=True,
text=True,
check=False,
)
except Exception:
return None
value = result.stdout.strip()
return value or None
def _get_debian_author() -> Tuple[str, str]:
"""
Determine the maintainer name/email for debian/changelog entries.
"""
name = os.environ.get("DEBFULLNAME")
email = os.environ.get("DEBEMAIL")
if not name:
name = os.environ.get("GIT_AUTHOR_NAME")
if not email:
email = os.environ.get("GIT_AUTHOR_EMAIL")
if not name:
name = _get_git_config_value("user.name")
if not email:
email = _get_git_config_value("user.email")
if not name:
name = "Unknown Maintainer"
if not email:
email = "unknown@example.com"
return name, email
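A hedged test sketch for the first tier of that fallback chain (pytest-style; the import path assumes these helpers live in pkgmgr.actions.release.files per this changeset, and the test name is illustrative):

from pkgmgr.actions.release.files import _get_debian_author

def test_debfullname_and_debemail_win(monkeypatch):
    monkeypatch.setenv("DEBFULLNAME", "Jane Doe")
    monkeypatch.setenv("DEBEMAIL", "jane@example.com")
    assert _get_debian_author() == ("Jane Doe", "jane@example.com")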
def update_debian_changelog(
debian_changelog_path: str,
package_name: str,
new_version: str,
message: Optional[str] = None,
preview: bool = False,
) -> None:
"""
Prepend a new entry to debian/changelog, if it exists.
"""
if not os.path.exists(debian_changelog_path):
print("[INFO] debian/changelog not found, skipping.")
return
debian_version = f"{new_version}-1"
now = datetime.now().astimezone()
date_str = now.strftime("%a, %d %b %Y %H:%M:%S %z")
author_name, author_email = _get_debian_author()
first_line = f"{package_name} ({debian_version}) unstable; urgency=medium"
body_line = message.strip() if message else f"Automated release {new_version}."
stanza = (
f"{first_line}\n\n"
f"  * {body_line}\n\n"
# dpkg changelog format: change lines indented two spaces, and a
# double space between the maintainer address and the date.
f" -- {author_name} <{author_email}>  {date_str}\n\n"
)
if preview:
print(
"[PREVIEW] Would prepend the following stanza to debian/changelog:\n"
f"{stanza}"
)
return
try:
with open(debian_changelog_path, "r", encoding="utf-8") as f:
existing = f.read()
except Exception as exc:
print(f"[WARN] Could not read debian/changelog: {exc}")
existing = ""
new_content = stanza + existing
with open(debian_changelog_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(f"Updated debian/changelog with version {debian_version}")
# ---------------------------------------------------------------------------
# Fedora / RPM spec %changelog helper
# ---------------------------------------------------------------------------
def update_spec_changelog(
spec_path: str,
package_name: str,
new_version: str,
message: Optional[str] = None,
preview: bool = False,
) -> None:
"""
Prepend a new entry to the %changelog section of an RPM spec file,
if present.
Typical RPM-style entry:
* Tue Dec 09 2025 John Doe <john@example.com> - 0.5.1-1
- Your changelog message
"""
if not os.path.exists(spec_path):
print("[INFO] RPM spec file not found, skipping spec changelog update.")
return
try:
with open(spec_path, "r", encoding="utf-8") as f:
content = f.read()
except Exception as exc:
print(f"[WARN] Could not read spec file for changelog update: {exc}")
return
version_release = f"{new_version}-1"
now = datetime.now().astimezone()
date_str = now.strftime("%a %b %d %Y")
# Reuse Debian maintainer discovery for author name/email; RPM entries
# use the same "<version>-1" version-release scheme.
author_name, author_email = _get_debian_author()
body_line = message.strip() if message else f"Automated release {new_version}."
stanza = (
f"* {date_str} {author_name} <{author_email}> - {version_release}\n"
f"- {body_line}\n\n"
)
marker = "%changelog"
idx = content.find(marker)
if idx == -1:
# No %changelog section yet: append one at the end.
new_content = content.rstrip() + "\n\n%changelog\n" + stanza
else:
# Insert stanza right after the %changelog line.
before = content[: idx + len(marker)]
after = content[idx + len(marker) :]
new_content = before + "\n" + stanza + after.lstrip("\n")
if preview:
print(
"[PREVIEW] Would update RPM %changelog section with the following "
"stanza:\n"
f"{stanza}"
)
return
try:
with open(spec_path, "w", encoding="utf-8") as f:
f.write(new_content)
except Exception as exc:
print(f"[WARN] Failed to write updated spec changelog section: {exc}")
return
print(
f"Updated RPM %changelog section in {os.path.basename(spec_path)} "
f"for {package_name} {version_release}"
)
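The insertion logic in isolation, on invented spec content: with an existing %changelog section the new stanza lands directly below the marker, ahead of older entries; without one, a fresh section is appended at the end:

content = (
    "Name: pkgmgr\nVersion: 0.7.3\n\n%changelog\n"
    "* Mon Dec 08 2025 Jane <jane@example.com> - 0.7.2-1\n- Previous entry\n"
)
stanza = "* Tue Dec 09 2025 Jane <jane@example.com> - 0.7.3-1\n- New entry\n\n"
marker = "%changelog"
idx = content.find(marker)
before = content[: idx + len(marker)]
after = content[idx + len(marker):]
new_content = before + "\n" + stanza + after.lstrip("\n")
assert new_content.index("0.7.3-1") < new_content.index("0.7.2-1")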

@@ -0,0 +1,95 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Git-related helpers for the release workflow.
Responsibilities:
- Run Git (or shell) commands with basic error reporting.
- Ensure main/master are synchronized with origin before tagging.
- Maintain the floating 'latest' tag that always points to the newest
release tag.
"""
from __future__ import annotations
import subprocess
from pkgmgr.core.git import GitError
def run_git_command(cmd: str) -> None:
"""
Run a Git (or shell) command with basic error reporting.
The command is executed via the shell, primarily for readability
when printed (as in 'git commit -am "msg"').
"""
print(f"[GIT] {cmd}")
try:
subprocess.run(cmd, shell=True, check=True)
except subprocess.CalledProcessError as exc:
print(f"[ERROR] Git command failed: {cmd}")
print(f" Exit code: {exc.returncode}")
# Note: stdout/stderr are None here because the command above runs
# without captured output; anything it printed already reached the
# terminal directly.
if exc.stdout:
print("--- stdout ---")
print(exc.stdout)
if exc.stderr:
print("--- stderr ---")
print(exc.stderr)
raise GitError(f"Git command failed: {cmd}") from exc
def sync_branch_with_remote(branch: str, preview: bool = False) -> None:
"""
Ensure the local main/master branch is up-to-date before tagging.
Behaviour:
- For main/master: run 'git fetch origin' and 'git pull origin <branch>'.
- For all other branches: only log that no automatic sync is performed.
"""
if branch not in ("main", "master"):
print(
f"[INFO] Skipping automatic git pull for non-main/master branch "
f"{branch}."
)
return
print(
f"[INFO] Updating branch {branch} from origin before creating tags..."
)
if preview:
print("[PREVIEW] Would run: git fetch origin")
print(f"[PREVIEW] Would run: git pull origin {branch}")
return
run_git_command("git fetch origin")
run_git_command(f"git pull origin {branch}")
def update_latest_tag(new_tag: str, preview: bool = False) -> None:
"""
Move the floating 'latest' tag to the newly created release tag.
Implementation details:
- We explicitly dereference the tag object via `<tag>^{}` so that
'latest' always points at the underlying commit, not at another tag.
- We create/update 'latest' as an annotated tag with a short message so
Git configurations that enforce annotated/signed tags do not fail
with "no tag message".
"""
target_ref = f"{new_tag}^{{}}"
print(f"[INFO] Updating 'latest' tag to point at {new_tag} (commit {target_ref})...")
if preview:
print(f"[PREVIEW] Would run: git tag -f -a latest {target_ref} "
f'-m "Floating latest tag for {new_tag}"')
print("[PREVIEW] Would run: git push origin latest --force")
return
run_git_command(
f'git tag -f -a latest {target_ref} '
f'-m "Floating latest tag for {new_tag}"'
)
run_git_command("git push origin latest --force")

@@ -0,0 +1,53 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Version discovery and bumping helpers for the release workflow.
"""
from __future__ import annotations
from pkgmgr.core.git import get_tags
from pkgmgr.core.version.semver import (
SemVer,
find_latest_version,
bump_major,
bump_minor,
bump_patch,
)
def determine_current_version() -> SemVer:
"""
Determine the current semantic version from Git tags.
Behaviour:
- If there are no tags or no SemVer-compatible tags, return 0.0.0.
- Otherwise, use the latest SemVer tag as current version.
"""
tags = get_tags()
if not tags:
return SemVer(0, 0, 0)
latest = find_latest_version(tags)
if latest is None:
return SemVer(0, 0, 0)
_tag, ver = latest
return ver
def bump_semver(current: SemVer, release_type: str) -> SemVer:
"""
Bump the given SemVer according to the release type.
release_type must be one of: "major", "minor", "patch".
"""
if release_type == "major":
return bump_major(current)
if release_type == "minor":
return bump_minor(current)
if release_type == "patch":
return bump_patch(current)
raise ValueError(f"Unknown release type: {release_type!r}")
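Illustrative usage of the two helpers together (import path per this changeset; SemVer is assumed to render as a dotted triple):

from pkgmgr.actions.release.versioning import bump_semver, determine_current_version

current = determine_current_version()   # e.g. 0.7.2 from the newest tag
print(bump_semver(current, "patch"))    # 0.7.3
print(bump_semver(current, "minor"))    # 0.8.0
print(bump_semver(current, "major"))    # 1.0.0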

@@ -1,8 +1,8 @@
import subprocess
import os
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.verify import verify_repository
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.verify import verify_repository
def clone_repos(
selected_repos,

@@ -2,8 +2,8 @@ import os
import subprocess
import sys
import yaml
from pkgmgr.generate_alias import generate_alias
from pkgmgr.save_user_config import save_user_config
from pkgmgr.core.command.alias import generate_alias
from pkgmgr.core.config.save import save_user_config
def create_repo(identifier, config_merged, user_config_path, bin_dir, remote=False, preview=False):
"""

@@ -1,7 +1,7 @@
import os
import sys
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
def deinstall_repos(selected_repos, repositories_base_dir, bin_dir, all_repos, preview=False):
for repo in selected_repos:

@@ -1,7 +1,7 @@
import shutil
import os
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
def delete_repos(selected_repos, repositories_base_dir, all_repos, preview=False):
for repo in selected_repos:

@@ -21,23 +21,23 @@ focused installer classes.
import os
from typing import List, Dict, Any
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.create_ink import create_ink
from pkgmgr.verify import verify_repository
from pkgmgr.clone_repos import clone_repos
from pkgmgr.context import RepoContext
from pkgmgr.resolve_command import resolve_command_for_repo
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.command.ink import create_ink
from pkgmgr.core.repository.verify import verify_repository
from pkgmgr.actions.repository.clone import clone_repos
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.core.command.resolve import resolve_command_for_repo
# Installer implementations
from pkgmgr.installers.os_packages import (
from pkgmgr.actions.repository.install.installers.os_packages import (
ArchPkgbuildInstaller,
DebianControlInstaller,
RpmSpecInstaller,
)
from pkgmgr.installers.nix_flake import NixFlakeInstaller
from pkgmgr.installers.python import PythonInstaller
from pkgmgr.installers.makefile import MakefileInstaller
from pkgmgr.actions.repository.install.installers.nix_flake import NixFlakeInstaller
from pkgmgr.actions.repository.install.installers.python import PythonInstaller
from pkgmgr.actions.repository.install.installers.makefile import MakefileInstaller
# Layering:

@@ -38,7 +38,7 @@ from abc import ABC, abstractmethod
from typing import Iterable, TYPE_CHECKING
if TYPE_CHECKING:
from pkgmgr.context import RepoContext
from pkgmgr.actions.repository.install.context import RepoContext
# ---------------------------------------------------------------------------

@@ -0,0 +1,19 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installer package for pkgmgr.
This exposes all installer classes so users can import them directly from
pkgmgr.actions.repository.install.installers.
"""
from pkgmgr.actions.repository.install.installers.base import BaseInstaller # noqa: F401
from pkgmgr.actions.repository.install.installers.nix_flake import NixFlakeInstaller # noqa: F401
from pkgmgr.actions.repository.install.installers.python import PythonInstaller # noqa: F401
from pkgmgr.actions.repository.install.installers.makefile import MakefileInstaller # noqa: F401
# OS-specific installers
from pkgmgr.actions.repository.install.installers.os_packages.arch_pkgbuild import ArchPkgbuildInstaller # noqa: F401
from pkgmgr.actions.repository.install.installers.os_packages.debian_control import DebianControlInstaller # noqa: F401
from pkgmgr.actions.repository.install.installers.os_packages.rpm_spec import RpmSpecInstaller # noqa: F401

@@ -8,8 +8,8 @@ Base interface for all installer components in the pkgmgr installation pipeline.
from abc import ABC, abstractmethod
from typing import Set
from pkgmgr.context import RepoContext
from pkgmgr.capabilities import CAPABILITY_MATCHERS
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.capabilities import CAPABILITY_MATCHERS
class BaseInstaller(ABC):

@@ -12,9 +12,9 @@ installation step.
import os
import re
from pkgmgr.context import RepoContext
from pkgmgr.installers.base import BaseInstaller
from pkgmgr.run_command import run_command
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
class MakefileInstaller(BaseInstaller):

@@ -19,12 +19,12 @@ import os
import shutil
from typing import TYPE_CHECKING
from pkgmgr.installers.base import BaseInstaller
from pkgmgr.run_command import run_command
from pkgmgr.actions.repository.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
if TYPE_CHECKING:
from pkgmgr.context import RepoContext
from pkgmgr.install_repos import InstallContext
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install import InstallContext
class NixFlakeInstaller(BaseInstaller):

@@ -3,9 +3,9 @@
import os
import shutil
from pkgmgr.context import RepoContext
from pkgmgr.installers.base import BaseInstaller
from pkgmgr.run_command import run_command
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
class ArchPkgbuildInstaller(BaseInstaller):

@@ -20,9 +20,9 @@ import shutil
from typing import List
from pkgmgr.context import RepoContext
from pkgmgr.installers.base import BaseInstaller
from pkgmgr.run_command import run_command
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
class DebianControlInstaller(BaseInstaller):

@@ -19,9 +19,9 @@ import shutil
from typing import List, Optional
from pkgmgr.context import RepoContext
from pkgmgr.installers.base import BaseInstaller
from pkgmgr.run_command import run_command
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
class RpmSpecInstaller(BaseInstaller):

@@ -17,8 +17,8 @@ All installation failures are treated as fatal errors (SystemExit).
import os
import sys
from pkgmgr.installers.base import BaseInstaller
from pkgmgr.run_command import run_command
from pkgmgr.actions.repository.install.installers.base import BaseInstaller
from pkgmgr.core.command.run import run_command
class PythonInstaller(BaseInstaller):

@@ -0,0 +1,352 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Pretty-print repository list with status, categories, tags and path.
- Tags come exclusively from YAML: repo["tags"].
- Categories come from repo["category_files"] (YAML file names without
.yml/.yaml) and optional repo["category"].
- Optional detail mode (--description) prints an extended section per
repository with description, homepage, etc.
"""
from __future__ import annotations
import os
import re
from textwrap import wrap
from typing import Any, Dict, List, Optional
Repository = Dict[str, Any]
RESET = "\033[0m"
BOLD = "\033[1m"
DIM = "\033[2m"
GREEN = "\033[32m"
YELLOW = "\033[33m"
RED = "\033[31m"
MAGENTA = "\033[35m"
GREY = "\033[90m"
def _compile_maybe_regex(pattern: str) -> Optional[re.Pattern[str]]:
"""
If pattern is of the form /.../, return a compiled regex (case-insensitive).
Otherwise return None.
"""
if not pattern:
return None
if len(pattern) >= 2 and pattern.startswith("/") and pattern.endswith("/"):
try:
return re.compile(pattern[1:-1], re.IGNORECASE)
except re.error:
return None
return None
def _status_matches(status: str, status_filter: str) -> bool:
"""
Match a status string against an optional filter (substring or /regex/).
"""
if not status_filter:
return True
regex = _compile_maybe_regex(status_filter)
if regex:
return bool(regex.search(status))
return status_filter.lower() in status.lower()
def _compute_repo_dir(repositories_base_dir: str, repo: Repository) -> str:
"""
Compute the local directory for a repository.
If the repository already has a 'directory' key, that is used;
otherwise the path is constructed from provider/account/repository
under repositories_base_dir.
"""
if repo.get("directory"):
return os.path.expanduser(str(repo["directory"]))
provider = str(repo.get("provider", ""))
account = str(repo.get("account", ""))
repository = str(repo.get("repository", ""))
return os.path.join(
os.path.expanduser(repositories_base_dir),
provider,
account,
repository,
)
def _compute_status(
repo: Repository,
repo_dir: str,
binaries_dir: str,
) -> str:
"""
Compute a human-readable status string, e.g. 'present,alias,ignored'.
"""
parts: List[str] = []
exists = os.path.isdir(repo_dir)
if exists:
parts.append("present")
else:
parts.append("absent")
alias = repo.get("alias")
if alias:
alias_path = os.path.join(os.path.expanduser(binaries_dir), str(alias))
if os.path.exists(alias_path):
parts.append("alias")
else:
parts.append("alias-missing")
if repo.get("ignore"):
parts.append("ignored")
return ",".join(parts) if parts else "-"
def _color_status(status_padded: str) -> str:
"""
Color individual status flags inside a padded status string.
Input is expected to be right-padded to the column width.
Color mapping (matching the branches below):
- present -> green
- absent -> magenta
- alias / alias-missing -> yellow
- ignored -> magenta
- other -> default
"""
core = status_padded.rstrip()
pad_spaces = len(status_padded) - len(core)
plain_parts = core.split(",") if core else []
colored_parts: List[str] = []
for raw_part in plain_parts:
name = raw_part.strip()
if not name:
continue
if name == "present":
color = GREEN
elif name == "absent":
color = MAGENTA
elif name in ("alias", "alias-missing"):
color = YELLOW
elif name == "ignored":
color = MAGENTA
else:
color = ""
if color:
colored_parts.append(f"{color}{name}{RESET}")
else:
colored_parts.append(name)
colored_core = ",".join(colored_parts)
return colored_core + (" " * pad_spaces)
def list_repositories(
repositories: List[Repository],
repositories_base_dir: str,
binaries_dir: str,
search_filter: str = "",
status_filter: str = "",
extra_tags: Optional[List[str]] = None,
show_description: bool = False,
) -> None:
"""
Print a table of repositories and (optionally) detailed descriptions.
Parameters
----------
repositories:
Repositories to show (usually already filtered by get_selected_repos).
repositories_base_dir:
Base directory where repositories live.
binaries_dir:
Directory where alias symlinks live.
search_filter:
Optional substring/regex filter on identifier and metadata.
status_filter:
Optional filter on computed status.
extra_tags:
Additional tags to show for each repository (CLI overlay only).
show_description:
If True, print a detailed block for each repository after the table.
"""
if extra_tags is None:
extra_tags = []
search_regex = _compile_maybe_regex(search_filter)
rows: List[Dict[str, Any]] = []
# ------------------------------------------------------------------
# Build rows
# ------------------------------------------------------------------
for repo in repositories:
identifier = str(repo.get("repository") or repo.get("alias") or "")
alias = str(repo.get("alias") or "")
provider = str(repo.get("provider") or "")
account = str(repo.get("account") or "")
description = str(repo.get("description") or "")
homepage = str(repo.get("homepage") or "")
repo_dir = _compute_repo_dir(repositories_base_dir, repo)
status = _compute_status(repo, repo_dir, binaries_dir)
if not _status_matches(status, status_filter):
continue
if search_filter:
haystack = " ".join(
[
identifier,
alias,
provider,
account,
description,
homepage,
repo_dir,
]
)
if search_regex:
if not search_regex.search(haystack):
continue
else:
if search_filter.lower() not in haystack.lower():
continue
categories: List[str] = []
categories.extend(map(str, repo.get("category_files", [])))
if repo.get("category"):
categories.append(str(repo["category"]))
yaml_tags: List[str] = list(map(str, repo.get("tags", [])))
display_tags: List[str] = sorted(
set(yaml_tags + list(map(str, extra_tags)))
)
rows.append(
{
"repo": repo,
"identifier": identifier,
"status": status,
"categories": categories,
"tags": display_tags,
"dir": repo_dir,
}
)
if not rows:
print("No repositories matched the given filters.")
return
# ------------------------------------------------------------------
# Table section (header grey, values white, per-flag colored status)
# ------------------------------------------------------------------
ident_width = max(len("IDENTIFIER"), max(len(r["identifier"]) for r in rows))
status_width = max(len("STATUS"), max(len(r["status"]) for r in rows))
cat_width = max(
len("CATEGORIES"),
max((len(",".join(r["categories"])) for r in rows), default=0),
)
tag_width = max(
len("TAGS"),
max((len(",".join(r["tags"])) for r in rows), default=0),
)
header = (
f"{GREY}{BOLD}"
f"{'IDENTIFIER'.ljust(ident_width)} "
f"{'STATUS'.ljust(status_width)} "
f"{'CATEGORIES'.ljust(cat_width)} "
f"{'TAGS'.ljust(tag_width)} "
f"DIR"
f"{RESET}"
)
print(header)
print("-" * (ident_width + status_width + cat_width + tag_width + 10 + 40))
for r in rows:
ident_col = r["identifier"].ljust(ident_width)
cat_col = ",".join(r["categories"]).ljust(cat_width)
tag_col = ",".join(r["tags"]).ljust(tag_width)
dir_col = r["dir"]
status = r["status"]
status_padded = status.ljust(status_width)
status_colored = _color_status(status_padded)
print(
f"{ident_col} "
f"{status_colored} "
f"{cat_col} "
f"{tag_col} "
f"{dir_col}"
)
# ------------------------------------------------------------------
# Detailed section (alias value red, same status coloring)
# ------------------------------------------------------------------
if not show_description:
return
print()
print(f"{BOLD}Detailed repository information:{RESET}")
print()
for r in rows:
repo = r["repo"]
identifier = r["identifier"]
alias = str(repo.get("alias") or "")
provider = str(repo.get("provider") or "")
account = str(repo.get("account") or "")
repository = str(repo.get("repository") or "")
description = str(repo.get("description") or "")
homepage = str(repo.get("homepage") or "")
categories = r["categories"]
tags = r["tags"]
repo_dir = r["dir"]
status = r["status"]
print(f"{BOLD}{identifier}{RESET}")
print(f" Provider: {provider}")
print(f" Account: {account}")
print(f" Repository: {repository}")
# Alias value highlighted in red
if alias:
print(f" Alias: {RED}{alias}{RESET}")
status_colored = _color_status(status)
print(f" Status: {status_colored}")
if categories:
print(f" Categories: {', '.join(categories)}")
if tags:
print(f" Tags: {', '.join(tags)}")
print(f" Directory: {repo_dir}")
if homepage:
print(f" Homepage: {homepage}")
if description:
print(" Description:")
for line in wrap(description, width=78):
print(f" {line}")
print()
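A minimal invocation sketch (field values invented; in the CLI this is wired up by handle_repos_command with the selected repositories):

from pkgmgr.actions.repository.list import list_repositories

repos = [{
    "provider": "github.com",
    "account": "kevin",
    "repository": "package-manager",
    "alias": "pkgmgr",
    "tags": ["cli"],
}]
list_repositories(
    repos,
    repositories_base_dir="~/Repositories",
    binaries_dir="~/.local/bin",
    status_filter="present",  # substring match; "/regex/" also accepted
)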

@@ -1,9 +1,9 @@
import os
import subprocess
import sys
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.verify import verify_repository
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.verify import verify_repository
def pull_with_verification(
selected_repos,

@@ -1,9 +1,9 @@
import sys
import shutil
from .exec_proxy_command import exec_proxy_command
from .run_command import run_command
from .get_repo_identifier import get_repo_identifier
from pkgmgr.actions.proxy import exec_proxy_command
from pkgmgr.core.command.run import run_command
from pkgmgr.core.repository.identifier import get_repo_identifier
def status_repos(

@@ -1,8 +1,8 @@
import sys
import shutil
from pkgmgr.pull_with_verification import pull_with_verification
from pkgmgr.install_repos import install_repos
from pkgmgr.actions.repository.pull import pull_with_verification
from pkgmgr.actions.repository.install import install_repos
def update_repos(
@@ -54,7 +54,7 @@ def update_repos(
)
if system_update:
from pkgmgr.run_command import run_command
from pkgmgr.core.command.run import run_command
# Nix: upgrade all profile entries (if Nix is available)
if shutil.which("nix") is not None:

pkgmgr/cli.py → pkgmgr/cli/__init__.py Executable file → Normal file
@@ -1,17 +1,20 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import os
import sys
from pkgmgr.load_config import load_config
from pkgmgr.cli_core import CLIContext, create_parser, dispatch_command
from pkgmgr.core.config.load import load_config
# Define configuration file paths.
PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
USER_CONFIG_PATH = os.path.join(PROJECT_ROOT, "config", "config.yaml")
from .context import CLIContext
from .parser import create_parser
from .dispatch import dispatch_command
__all__ = ["CLIContext", "create_parser", "dispatch_command", "main"]
# User config lives in the home directory:
# ~/.config/pkgmgr/config.yaml
USER_CONFIG_PATH = os.path.expanduser("~/.config/pkgmgr/config.yaml")
DESCRIPTION_TEXT = """\
\033[1;32mPackage Manager 🤖📦\033[0m
@@ -63,20 +66,31 @@ For detailed help on each command, use:
def main() -> None:
# Load merged configuration
"""
Entry point for the pkgmgr CLI.
"""
config_merged = load_config(USER_CONFIG_PATH)
repositories_base_dir = os.path.expanduser(
config_merged["directories"]["repositories"]
# Directories: be robust and provide sane defaults if missing
directories = config_merged.get("directories") or {}
repositories_dir = os.path.expanduser(
directories.get("repositories", "~/Repositories")
)
binaries_dir = os.path.expanduser(
config_merged["directories"]["binaries"]
directories.get("binaries", "~/.local/bin")
)
all_repositories = config_merged["repositories"]
# Ensure the merged config actually contains the resolved directories
config_merged.setdefault("directories", {})
config_merged["directories"]["repositories"] = repositories_dir
config_merged["directories"]["binaries"] = binaries_dir
all_repositories = config_merged.get("repositories", [])
ctx = CLIContext(
config_merged=config_merged,
repositories_base_dir=repositories_base_dir,
repositories_base_dir=repositories_dir,
all_repositories=all_repositories,
binaries_dir=binaries_dir,
user_config_path=USER_CONFIG_PATH,
@@ -85,7 +99,6 @@ def main() -> None:
parser = create_parser(DESCRIPTION_TEXT)
args = parser.parse_args()
# If no subcommand is provided, show help
if not getattr(args, "command", None):
parser.print_help()
return

@@ -2,8 +2,8 @@ from __future__ import annotations
import sys
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.branch_commands import open_branch
from pkgmgr.cli.context import CLIContext
from pkgmgr.actions.branch import open_branch, close_branch
def handle_branch(args, ctx: CLIContext) -> None:
@@ -11,7 +11,8 @@ def handle_branch(args, ctx: CLIContext) -> None:
Handle `pkgmgr branch` subcommands.
Currently supported:
- pkgmgr branch open [<name>] [--base <branch>]
- pkgmgr branch close [<name>] [--base <branch>]
"""
if args.subcommand == "open":
open_branch(
@@ -21,5 +22,13 @@ def handle_branch(args, ctx: CLIContext) -> None:
)
return
if args.subcommand == "close":
close_branch(
name=getattr(args, "name", None),
base_branch=getattr(args, "base", "main"),
cwd=".",
)
return
print(f"Unknown branch subcommand: {args.subcommand}")
sys.exit(2)

@@ -4,12 +4,12 @@ import os
import sys
from typing import Any, Dict, List, Optional, Tuple
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.git_utils import get_tags
from pkgmgr.versioning import SemVer, extract_semver_from_tags
from pkgmgr.changelog import generate_changelog
from pkgmgr.cli.context import CLIContext
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.git import get_tags
from pkgmgr.core.version.semver import SemVer, extract_semver_from_tags
from pkgmgr.actions.changelog import generate_changelog
Repository = Dict[str, Any]

@@ -0,0 +1,240 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import os
import sys
import shutil
from pathlib import Path
from typing import Any, Dict
import yaml
from pkgmgr.cli.context import CLIContext
from pkgmgr.actions.config.init import config_init
from pkgmgr.actions.config.add import interactive_add
from pkgmgr.core.repository.resolve import resolve_repos
from pkgmgr.core.config.save import save_user_config
from pkgmgr.actions.config.show import show_config
from pkgmgr.core.command.run import run_command
def _load_user_config(user_config_path: str) -> Dict[str, Any]:
"""
Load the user config from ~/.config/pkgmgr/config.yaml
(or whatever ctx.user_config_path is), creating the directory if needed.
"""
user_config_path_expanded = os.path.expanduser(user_config_path)
cfg_dir = os.path.dirname(user_config_path_expanded)
if cfg_dir and not os.path.isdir(cfg_dir):
os.makedirs(cfg_dir, exist_ok=True)
if os.path.exists(user_config_path_expanded):
with open(user_config_path_expanded, "r", encoding="utf-8") as f:
return yaml.safe_load(f) or {"repositories": []}
return {"repositories": []}
def _find_defaults_source_dir() -> str | None:
"""
Find the directory inside the installed pkgmgr package OR the
project root that contains default config files.
Preferred locations (in this order):
- <pkg_root>/config_defaults
- <pkg_root>/config
- <project_root>/config_defaults
- <project_root>/config
"""
import pkgmgr # local import to avoid circular deps
pkg_root = Path(pkgmgr.__file__).resolve().parent
project_root = pkg_root.parent
candidates = [
pkg_root / "config_defaults",
pkg_root / "config",
project_root / "config_defaults",
project_root / "config",
]
for cand in candidates:
if cand.is_dir():
return str(cand)
return None
def _update_default_configs(user_config_path: str) -> None:
"""
Copy all default *.yml/*.yaml files from the installed pkgmgr package
into ~/.config/pkgmgr/, overwriting existing ones except the user
config file itself (config.yaml), which is never touched.
"""
source_dir = _find_defaults_source_dir()
if not source_dir:
print(
"[WARN] No config_defaults or config directory found in "
"pkgmgr installation. Nothing to update."
)
return
dest_dir = os.path.dirname(os.path.expanduser(user_config_path))
if not dest_dir:
dest_dir = os.path.expanduser("~/.config/pkgmgr")
os.makedirs(dest_dir, exist_ok=True)
for name in os.listdir(source_dir):
lower = name.lower()
if not (lower.endswith(".yml") or lower.endswith(".yaml")):
continue
if name == "config.yaml":
# Never overwrite the user config template / live config
continue
src = os.path.join(source_dir, name)
dst = os.path.join(dest_dir, name)
shutil.copy2(src, dst)
print(f"[INFO] Updated default config file: {dst}")
def handle_config(args, ctx: CLIContext) -> None:
"""
Handle 'pkgmgr config' subcommands.
"""
user_config_path = ctx.user_config_path
# ------------------------------------------------------------
# config show
# ------------------------------------------------------------
if args.subcommand == "show":
if args.all or (not args.identifiers):
# Full merged config view
show_config([], user_config_path, full_config=True)
else:
# Show only matching entries from user config
user_config = _load_user_config(user_config_path)
selected = resolve_repos(
args.identifiers,
user_config.get("repositories", []),
)
if selected:
show_config(
selected,
user_config_path,
full_config=False,
)
return
# ------------------------------------------------------------
# config add
# ------------------------------------------------------------
if args.subcommand == "add":
interactive_add(ctx.config_merged, user_config_path)
return
# ------------------------------------------------------------
# config edit
# ------------------------------------------------------------
if args.subcommand == "edit":
run_command(f"nano {user_config_path}")
return
# ------------------------------------------------------------
# config init
# ------------------------------------------------------------
if args.subcommand == "init":
user_config = _load_user_config(user_config_path)
config_init(
user_config,
ctx.config_merged,
ctx.binaries_dir,
user_config_path,
)
return
# ------------------------------------------------------------
# config delete
# ------------------------------------------------------------
if args.subcommand == "delete":
user_config = _load_user_config(user_config_path)
if args.all or not args.identifiers:
print(
"[ERROR] 'config delete' requires explicit identifiers. "
"Use 'config show' to inspect entries."
)
return
to_delete = resolve_repos(
args.identifiers,
user_config.get("repositories", []),
)
new_repos = [
entry
for entry in user_config.get("repositories", [])
if entry not in to_delete
]
user_config["repositories"] = new_repos
save_user_config(user_config, user_config_path)
print(f"Deleted {len(to_delete)} entries from user config.")
return
# ------------------------------------------------------------
# config ignore
# ------------------------------------------------------------
if args.subcommand == "ignore":
user_config = _load_user_config(user_config_path)
if args.all or not args.identifiers:
print(
"[ERROR] 'config ignore' requires explicit identifiers. "
"Use 'config show' to inspect entries."
)
return
to_modify = resolve_repos(
args.identifiers,
user_config.get("repositories", []),
)
for entry in user_config["repositories"]:
key = (
entry.get("provider"),
entry.get("account"),
entry.get("repository"),
)
for mod in to_modify:
mod_key = (
mod.get("provider"),
mod.get("account"),
mod.get("repository"),
)
if key == mod_key:
entry["ignore"] = args.set == "true"
print(
f"Set ignore for {key} to {entry['ignore']}"
)
save_user_config(user_config, user_config_path)
return
# ------------------------------------------------------------
# config update
# ------------------------------------------------------------
if args.subcommand == "update":
"""
Copy default YAML configs from the installed package into the
user's ~/.config/pkgmgr directory.
This will overwrite files with the same name (except config.yaml).
"""
_update_default_configs(user_config_path)
return
# ------------------------------------------------------------
# Unknown subcommand
# ------------------------------------------------------------
print(f"Unknown config subcommand: {args.subcommand}")
sys.exit(2)
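Worth noting for 'config ignore': entries are matched by their (provider, account, repository) value triple rather than object identity, so entries resolved into fresh dicts still flip the right YAML entry. A minimal sketch:

entry = {"provider": "github.com", "account": "kevin", "repository": "pkgmgr"}
mod = dict(entry)  # a distinct object, as resolve_repos may return
key = (entry.get("provider"), entry.get("account"), entry.get("repository"))
mod_key = (mod.get("provider"), mod.get("account"), mod.get("repository"))
assert entry is not mod and key == mod_key  # value equality is what matters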

@@ -3,8 +3,8 @@ from __future__ import annotations
import sys
from typing import Any, Dict, List
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.exec_proxy_command import exec_proxy_command
from pkgmgr.cli.context import CLIContext
from pkgmgr.actions.proxy import exec_proxy_command
Repository = Dict[str, Any]

@@ -0,0 +1,98 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Release command wiring for the pkgmgr CLI.
This module implements the `pkgmgr release` subcommand on top of the
generic selection logic from cli.dispatch. It does not define its
own subparser; the CLI surface is configured in cli.parser.
Responsibilities:
- Take the parsed argparse.Namespace for the `release` command.
- Use the list of selected repositories provided by dispatch_command().
- Optionally list affected repositories when --list is set.
- For each selected repository, run pkgmgr.actions.release.release(...) in
the context of that repository directory.
"""
from __future__ import annotations
import os
from typing import Any, Dict, List
from pkgmgr.cli.context import CLIContext
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.actions.release import release as run_release
Repository = Dict[str, Any]
def handle_release(
args,
ctx: CLIContext,
selected: List[Repository],
) -> None:
"""
Handle the `pkgmgr release` subcommand.
Flow:
1) Use the `selected` repositories as computed by dispatch_command().
2) If --list is given, print the identifiers of the selected repos
and return without running any release.
3) For each selected repository:
- Resolve its identifier and local directory.
- Change into that directory.
- Call pkgmgr.actions.release.release(...) with the parsed options.
"""
if not selected:
print("[pkgmgr] No repositories selected for release.")
return
# List-only mode: show which repositories would be affected.
if getattr(args, "list", False):
print("[pkgmgr] Repositories that would be affected by this release:")
for repo in selected:
identifier = get_repo_identifier(repo, ctx.all_repositories)
print(f" - {identifier}")
return
for repo in selected:
identifier = get_repo_identifier(repo, ctx.all_repositories)
repo_dir = repo.get("directory")
if not repo_dir:
try:
repo_dir = get_repo_dir(ctx.repositories_base_dir, repo)
except Exception:
repo_dir = None
if not repo_dir or not os.path.isdir(repo_dir):
print(
f"[WARN] Skipping repository {identifier}: "
"local directory does not exist."
)
continue
print(
f"[pkgmgr] Running release for repository {identifier} "
f"in '{repo_dir}'..."
)
# Change to repo directory and invoke the helper.
cwd_before = os.getcwd()
try:
os.chdir(repo_dir)
run_release(
pyproject_path="pyproject.toml",
changelog_path="CHANGELOG.md",
release_type=args.release_type,
message=args.message or None,
preview=getattr(args, "preview", False),
force=getattr(args, "force", False),
close=getattr(args, "close", False),
)
finally:
os.chdir(cwd_before)

@@ -1,18 +1,21 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import sys
from typing import Any, Dict, List
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.install_repos import install_repos
from pkgmgr.deinstall_repos import deinstall_repos
from pkgmgr.delete_repos import delete_repos
from pkgmgr.update_repos import update_repos
from pkgmgr.status_repos import status_repos
from pkgmgr.list_repositories import list_repositories
from pkgmgr.run_command import run_command
from pkgmgr.create_repo import create_repo
from pkgmgr.cli.context import CLIContext
from pkgmgr.actions.repository.install import install_repos
from pkgmgr.actions.repository.deinstall import deinstall_repos
from pkgmgr.actions.repository.delete import delete_repos
from pkgmgr.actions.repository.update import update_repos
from pkgmgr.actions.repository.status import status_repos
from pkgmgr.actions.repository.list import list_repositories
from pkgmgr.core.command.run import run_command
from pkgmgr.actions.repository.create import create_repo
from pkgmgr.core.repository.selected import get_selected_repos
Repository = Dict[str, Any]
@@ -23,15 +26,12 @@ def handle_repos_command(
selected: List[Repository],
) -> None:
"""
Handle repository-related commands:
- install / update / deinstall / delete / status
- path / shell
- create / list
Handle core repository commands (install/update/deinstall/delete/.../list).
"""
# --------------------------------------------------------
# install / update
# --------------------------------------------------------
# ------------------------------------------------------------
# install
# ------------------------------------------------------------
if args.command == "install":
install_repos(
selected,
@@ -46,6 +46,9 @@ def handle_repos_command(
)
return
# ------------------------------------------------------------
# update
# ------------------------------------------------------------
if args.command == "update":
update_repos(
selected,
@@ -61,9 +64,9 @@ def handle_repos_command(
)
return
# --------------------------------------------------------
# deinstall / delete
# --------------------------------------------------------
# ------------------------------------------------------------
# deinstall
# ------------------------------------------------------------
if args.command == "deinstall":
deinstall_repos(
selected,
@@ -74,6 +77,9 @@ def handle_repos_command(
)
return
# ------------------------------------------------------------
# delete
# ------------------------------------------------------------
if args.command == "delete":
delete_repos(
selected,
@@ -83,9 +89,9 @@ def handle_repos_command(
)
return
# --------------------------------------------------------
# ------------------------------------------------------------
# status
# --------------------------------------------------------
# ------------------------------------------------------------
if args.command == "status":
status_repos(
selected,
@@ -98,20 +104,20 @@ def handle_repos_command(
)
return
# --------------------------------------------------------
# ------------------------------------------------------------
# path
# --------------------------------------------------------
# ------------------------------------------------------------
if args.command == "path":
for repository in selected:
print(repository["directory"])
return
# --------------------------------------------------------
# ------------------------------------------------------------
# shell
# --------------------------------------------------------
# ------------------------------------------------------------
if args.command == "shell":
if not args.shell_command:
print("No shell command specified.")
print("[ERROR] 'shell' requires a command via -c/--command.")
sys.exit(2)
command_to_run = " ".join(args.shell_command)
for repository in selected:
@@ -125,13 +131,13 @@ def handle_repos_command(
)
return
# --------------------------------------------------------
# ------------------------------------------------------------
# create
# --------------------------------------------------------
# ------------------------------------------------------------
if args.command == "create":
if not args.identifiers:
print(
"No identifiers provided. Please specify at least one identifier "
"[ERROR] 'create' requires at least one identifier "
"in the format provider/account/repository."
)
sys.exit(1)
@@ -147,15 +153,19 @@ def handle_repos_command(
)
return
# --------------------------------------------------------
# ------------------------------------------------------------
# list
# --------------------------------------------------------
# ------------------------------------------------------------
if args.command == "list":
list_repositories(
ctx.all_repositories,
selected,
ctx.repositories_base_dir,
ctx.binaries_dir,
search_filter=args.search,
status_filter=args.status,
status_filter=getattr(args, "status", "") or "",
extra_tags=getattr(args, "tag", []) or [],
show_description=getattr(args, "description", False),
)
return
print(f"[ERROR] Unknown repos command: {args.command}")
sys.exit(2)

@@ -5,9 +5,9 @@ import os
from typing import Any, Dict, List
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.run_command import run_command
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.cli.context import CLIContext
from pkgmgr.core.command.run import run_command
from pkgmgr.core.repository.identifier import get_repo_identifier
Repository = Dict[str, Any]

@@ -4,12 +4,12 @@ import os
import sys
from typing import Any, Dict, List, Optional, Tuple
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.git_utils import get_tags
from pkgmgr.versioning import SemVer, find_latest_version
from pkgmgr.version_sources import (
from pkgmgr.cli.context import CLIContext
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.git import get_tags
from pkgmgr.core.version.semver import SemVer, find_latest_version
from pkgmgr.core.version.source import (
read_pyproject_version,
read_flake_version,
read_pkgbuild_version,

pkgmgr/cli/dispatch.py Normal file
@@ -0,0 +1,178 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import os
import sys
from typing import List, Dict, Any
from pkgmgr.cli.context import CLIContext
from pkgmgr.cli.proxy import maybe_handle_proxy
from pkgmgr.core.repository.selected import get_selected_repos
from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.cli.commands import (
handle_repos_command,
handle_tools_command,
handle_release,
handle_version,
handle_config,
handle_make,
handle_changelog,
handle_branch,
)
def _has_explicit_selection(args) -> bool:
"""
Return True if the user explicitly selected repositories via
identifiers / --all / --category / --tag / --string.
"""
identifiers = getattr(args, "identifiers", []) or []
use_all = getattr(args, "all", False)
categories = getattr(args, "category", []) or []
tags = getattr(args, "tag", []) or []
string_filter = getattr(args, "string", "") or ""
return bool(
use_all
or identifiers
or categories
or tags
or string_filter
)
def _select_repo_for_current_directory(
ctx: CLIContext,
) -> List[Dict[str, Any]]:
"""
Heuristic: find the repository whose local directory matches the
current working directory or is the closest parent.
Example:
- Repo directory: /home/kevin/Repositories/foo
- CWD: /home/kevin/Repositories/foo/subdir
'foo' is selected.
"""
cwd = os.path.abspath(os.getcwd())
candidates: List[tuple[str, Dict[str, Any]]] = []
for repo in ctx.all_repositories:
repo_dir = repo.get("directory")
if not repo_dir:
try:
repo_dir = get_repo_dir(ctx.repositories_base_dir, repo)
except Exception:
repo_dir = None
if not repo_dir:
continue
repo_dir_abs = os.path.abspath(os.path.expanduser(repo_dir))
if cwd == repo_dir_abs or cwd.startswith(repo_dir_abs + os.sep):
candidates.append((repo_dir_abs, repo))
if not candidates:
return []
# Pick the repo with the longest (most specific) path.
candidates.sort(key=lambda item: len(item[0]), reverse=True)
return [candidates[0][1]]
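The longest-path tie-break in isolation: a CWD inside a nested checkout matches both prefixes, and sorting by path length picks the more specific repository:

candidates = [
    ("/home/kevin/Repositories/foo", {"repository": "foo"}),
    ("/home/kevin/Repositories/foo/vendor/bar", {"repository": "bar"}),
]
# CWD /home/kevin/Repositories/foo/vendor/bar/src matches both entries.
candidates.sort(key=lambda item: len(item[0]), reverse=True)
assert candidates[0][1]["repository"] == "bar"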
def dispatch_command(args, ctx: CLIContext) -> None:
"""
Dispatch the parsed arguments to the appropriate command handler.
"""
# First: proxy commands (git / docker / docker compose / make wrapper etc.)
if maybe_handle_proxy(args, ctx):
return
# Commands that operate on repository selections
commands_with_selection: List[str] = [
"install",
"update",
"deinstall",
"delete",
"status",
"path",
"shell",
"create",
"list",
"make",
"release",
"version",
"changelog",
"explore",
"terminal",
"code",
]
if getattr(args, "command", None) in commands_with_selection:
if _has_explicit_selection(args):
# Classic selection logic (identifiers / --all / filters)
selected = get_selected_repos(args, ctx.all_repositories)
else:
# Default per help text: repository of current folder.
selected = _select_repo_for_current_directory(ctx)
# If none is found, leave 'selected' empty.
# Individual handlers will then emit a clear message instead
# of silently picking an unrelated repository.
else:
selected = []
# ------------------------------------------------------------------ #
# Repos-related commands
# ------------------------------------------------------------------ #
if args.command in (
"install",
"update",
"deinstall",
"delete",
"status",
"path",
"shell",
"create",
"list",
):
handle_repos_command(args, ctx, selected)
return
# ------------------------------------------------------------------ #
# Tools (explore / terminal / code)
# ------------------------------------------------------------------ #
if args.command in ("explore", "terminal", "code"):
handle_tools_command(args, ctx, selected)
return
# ------------------------------------------------------------------ #
# Release / Version / Changelog / Config / Make / Branch
# ------------------------------------------------------------------ #
if args.command == "release":
handle_release(args, ctx, selected)
return
if args.command == "version":
handle_version(args, ctx, selected)
return
if args.command == "changelog":
handle_changelog(args, ctx, selected)
return
if args.command == "config":
handle_config(args, ctx)
return
if args.command == "make":
handle_make(args, ctx, selected)
return
if args.command == "branch":
handle_branch(args, ctx)
return
print(f"Unknown command: {args.command}")
sys.exit(2)

@@ -1,8 +1,11 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
from pkgmgr.cli_core.proxy import register_proxy_commands
from pkgmgr.cli.proxy import register_proxy_commands
class SortedSubParsersAction(argparse._SubParsersAction):
@@ -12,13 +15,20 @@ class SortedSubParsersAction(argparse._SubParsersAction):
def add_parser(self, name, **kwargs):
parser = super().add_parser(name, **kwargs)
# Sort choices alphabetically by dest (subcommand name)
self._choices_actions.sort(key=lambda a: a.dest)
return parser
def add_identifier_arguments(subparser: argparse.ArgumentParser) -> None:
"""
Attach generic repository selection arguments to a subparser.
Common identifier / selection arguments for many subcommands.
Selection modes (mutually exclusive by intent, not hard-enforced):
- identifiers (positional): select by alias / provider/account/repo
- --all: select all repositories
- --category / --string / --tag: filter-based selection on top
of the full repository set
"""
subparser.add_argument(
"identifiers",
@@ -39,6 +49,33 @@ def add_identifier_arguments(subparser: argparse.ArgumentParser) -> None:
"yes | pkgmgr {subcommand} --all"
),
)
subparser.add_argument(
"--category",
nargs="+",
default=[],
help=(
"Filter repositories by category patterns derived from config "
"filenames or repo metadata (use filename without .yml/.yaml, "
"or /regex/ to use a regular expression)."
),
)
subparser.add_argument(
"--string",
default="",
help=(
"Filter repositories whose identifier / name / path contains this "
"substring (case-insensitive). Use /regex/ for regular expressions."
),
)
subparser.add_argument(
"--tag",
action="append",
default=[],
help=(
"Filter repositories by tag. Matches tags from the repository "
"collector and category tags. Use /regex/ for regular expressions."
),
)
subparser.add_argument(
"--preview",
action="store_true",
@@ -61,7 +98,7 @@ def add_identifier_arguments(subparser: argparse.ArgumentParser) -> None:
def add_install_update_arguments(subparser: argparse.ArgumentParser) -> None:
"""
Attach shared flags for install/update-like commands.
Common arguments for install/update commands.
"""
add_identifier_arguments(subparser)
subparser.add_argument(
@@ -94,10 +131,7 @@ def add_install_update_arguments(subparser: argparse.ArgumentParser) -> None:
def create_parser(description_text: str) -> argparse.ArgumentParser:
"""
Create and configure the top-level argument parser for pkgmgr.
This function defines *only* the CLI surface (arguments & subcommands),
but no business logic.
Create the top-level argument parser for pkgmgr.
"""
parser = argparse.ArgumentParser(
description=description_text,
@@ -110,7 +144,7 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
)
# ------------------------------------------------------------
# install / update
# install / update / deinstall / delete
# ------------------------------------------------------------
install_parser = subparsers.add_parser(
"install",
@@ -129,9 +163,6 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
help="Include system update commands",
)
# ------------------------------------------------------------
# deinstall / delete
# ------------------------------------------------------------
deinstall_parser = subparsers.add_parser(
"deinstall",
help="Remove alias links to repository/repositories",
@@ -147,7 +178,7 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
# ------------------------------------------------------------
# create
# ------------------------------------------------------------
create_parser = subparsers.add_parser(
create_cmd_parser = subparsers.add_parser(
"create",
help=(
"Create new repository entries: add them to the config if not "
@@ -155,8 +186,8 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
"remotely if --remote is set."
),
)
add_identifier_arguments(create_parser)
create_parser.add_argument(
add_identifier_arguments(create_cmd_parser)
create_cmd_parser.add_argument(
"--remote",
action="store_true",
help="If set, add the remote and push the initial commit.",
@@ -228,6 +259,14 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
help="Set ignore to true or false",
)
config_subparsers.add_parser(
"update",
help=(
"Update default config files in ~/.config/pkgmgr/ from the "
"installed pkgmgr package (does not touch config.yaml)."
),
)
# ------------------------------------------------------------
# path / explore / terminal / code / shell
# ------------------------------------------------------------
@@ -265,7 +304,10 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
"--command",
nargs=argparse.REMAINDER,
dest="shell_command",
help="The shell command (and its arguments) to execute in each repository",
help=(
"The shell command (and its arguments) to execute in each "
"repository"
),
default=[],
)
@@ -274,7 +316,7 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
# ------------------------------------------------------------
branch_parser = subparsers.add_parser(
"branch",
help="Branch-related utilities (e.g. open feature branches)",
help="Branch-related utilities (e.g. open/close feature branches)",
)
branch_subparsers = branch_parser.add_subparsers(
dest="subcommand",
@@ -289,7 +331,10 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
branch_open.add_argument(
"name",
nargs="?",
help="Name of the new branch (optional; will be asked interactively if omitted)",
help=(
"Name of the new branch (optional; will be asked interactively "
"if omitted)"
),
)
branch_open.add_argument(
"--base",
@@ -297,6 +342,26 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
help="Base branch to create the new branch from (default: main)",
)
branch_close = branch_subparsers.add_parser(
"close",
help="Merge a feature branch into base and delete it",
)
branch_close.add_argument(
"name",
nargs="?",
help=(
"Name of the branch to close (optional; current branch is used "
"if omitted)"
),
)
branch_close.add_argument(
"--base",
default="main",
help=(
"Base branch to merge into (default: main; falls back to master "
"internally if main does not exist)"
),
)
# ------------------------------------------------------------
# release
@@ -316,12 +381,32 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
release_parser.add_argument(
"-m",
"--message",
default="",
default=None,
help=(
"Optional release message to add to the changelog and tag."
),
)
# Generic selection / preview / list / extra_args
add_identifier_arguments(release_parser)
# Close current branch after successful release
release_parser.add_argument(
"--close",
action="store_true",
help=(
"Close the current branch after a successful release in each "
"repository, if it is not main/master."
),
)
# Force: skip preview+confirmation and run release directly
release_parser.add_argument(
"-f",
"--force",
action="store_true",
help=(
"Skip the interactive preview+confirmation step and run the "
"release directly."
),
)
# ------------------------------------------------------------
# version
@@ -330,7 +415,8 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
"version",
help=(
"Show version information for repository/ies "
"(git tags, pyproject.toml, flake.nix, PKGBUILD, debian, spec, Ansible Galaxy)."
"(git tags, pyproject.toml, flake.nix, PKGBUILD, debian, spec, "
"Ansible Galaxy)."
),
)
add_identifier_arguments(version_parser)
@@ -364,20 +450,29 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
"list",
help="List all repositories with details and status",
)
list_parser.add_argument(
"--search",
default="",
help="Filter repositories that contain the given string",
)
# same selection logic as for install/update/etc.:
add_identifier_arguments(list_parser)
list_parser.add_argument(
"--status",
type=str,
default="",
help="Filter repositories by status (case insensitive)",
help=(
"Filter repositories by status (case insensitive). "
"Use /regex/ for regular expressions."
),
)
list_parser.add_argument(
"--description",
action="store_true",
help=(
"Show an additional detailed section per repository "
"(description, homepage, tags, categories, paths)."
),
)
# ------------------------------------------------------------
# make (wrapper around make in repositories)
# make
# ------------------------------------------------------------
make_parser = subparsers.add_parser(
"make",
@@ -403,7 +498,7 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
add_identifier_arguments(make_deinstall)
# ------------------------------------------------------------
# Proxy commands (git, docker, docker compose)
# Proxy commands (git, docker, docker compose, ...)
# ------------------------------------------------------------
register_proxy_commands(subparsers)
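The shared selection surface added in this file (positional identifiers, --all, --category, --string, --tag) composes the same way on every subcommand. A minimal sketch of the flag wiring using plain argparse, with hypothetical values:

import argparse

parser = argparse.ArgumentParser(prog="pkgmgr-sketch")
subparsers = parser.add_subparsers(dest="command")
list_parser = subparsers.add_parser("list")
list_parser.add_argument("identifiers", nargs="*", default=[])
list_parser.add_argument("--all", action="store_true")
list_parser.add_argument("--category", nargs="+", default=[])
list_parser.add_argument("--string", default="")
list_parser.add_argument("--tag", action="append", default=[])

args = parser.parse_args(["list", "--tag", "cli", "--string", "pkg"])
print(args.tag, args.string)  # ['cli'] pkg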

View File

@@ -1,14 +1,19 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import argparse
import os
import sys
from typing import Dict, List
from typing import Dict, List, Any
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.clone_repos import clone_repos
from pkgmgr.exec_proxy_command import exec_proxy_command
from pkgmgr.get_selected_repos import get_selected_repos
from pkgmgr.pull_with_verification import pull_with_verification
from pkgmgr.cli.context import CLIContext
from pkgmgr.actions.repository.clone import clone_repos
from pkgmgr.actions.proxy import exec_proxy_command
from pkgmgr.actions.repository.pull import pull_with_verification
from pkgmgr.core.repository.selected import get_selected_repos
from pkgmgr.core.repository.dir import get_repo_dir
PROXY_COMMANDS: Dict[str, List[str]] = {
@@ -42,10 +47,7 @@ PROXY_COMMANDS: Dict[str, List[str]] = {
def _add_proxy_identifier_arguments(parser: argparse.ArgumentParser) -> None:
"""
Local copy of the identifier argument set for proxy commands.
This duplicates the semantics of cli.parser.add_identifier_arguments
to avoid circular imports.
Selection arguments for proxy subcommands.
"""
parser.add_argument(
"identifiers",
@@ -66,6 +68,24 @@ def _add_proxy_identifier_arguments(parser: argparse.ArgumentParser) -> None:
"yes | pkgmgr {subcommand} --all"
),
)
parser.add_argument(
"--category",
nargs="+",
default=[],
help=(
"Filter repositories by category patterns derived from config "
"filenames or repo metadata (use filename without .yml/.yaml, "
"or /regex/ to use a regular expression)."
),
)
parser.add_argument(
"--string",
default="",
help=(
"Filter repositories whose identifier / name / path contains this "
"substring (case-insensitive). Use /regex/ for regular expressions."
),
)
parser.add_argument(
"--preview",
action="store_true",
@@ -86,12 +106,62 @@ def _add_proxy_identifier_arguments(parser: argparse.ArgumentParser) -> None:
)
def _proxy_has_explicit_selection(args: argparse.Namespace) -> bool:
"""
Same semantics as in the main dispatch:
True if the user explicitly selected repositories.
"""
identifiers = getattr(args, "identifiers", []) or []
use_all = getattr(args, "all", False)
categories = getattr(args, "category", []) or []
string_filter = getattr(args, "string", "") or ""
# Proxy commands currently do not support --tag, so it is not checked here.
return bool(
use_all
or identifiers
or categories
or string_filter
)
def _select_repo_for_current_directory(
ctx: CLIContext,
) -> List[Dict[str, Any]]:
"""
Heuristic: find the repository whose local directory matches the
current working directory or is the closest parent.
"""
cwd = os.path.abspath(os.getcwd())
candidates: List[tuple[str, Dict[str, Any]]] = []
for repo in ctx.all_repositories:
repo_dir = repo.get("directory")
if not repo_dir:
try:
repo_dir = get_repo_dir(ctx.repositories_base_dir, repo)
except Exception:
repo_dir = None
if not repo_dir:
continue
repo_dir_abs = os.path.abspath(os.path.expanduser(repo_dir))
if cwd == repo_dir_abs or cwd.startswith(repo_dir_abs + os.sep):
candidates.append((repo_dir_abs, repo))
if not candidates:
return []
# Pick the repo with the longest (most specific) path.
candidates.sort(key=lambda item: len(item[0]), reverse=True)
return [candidates[0][1]]
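In other words, the heuristic is "longest matching parent path wins". A self-contained sketch of the same idea with made-up paths:

import os

repos = [
    {"repository": "outer", "dir": "/home/user/Repos/outer"},
    {"repository": "inner", "dir": "/home/user/Repos/outer/vendor/inner"},
]
cwd = "/home/user/Repos/outer/vendor/inner/src"

candidates = [
    (r["dir"], r)
    for r in repos
    if cwd == r["dir"] or cwd.startswith(r["dir"] + os.sep)
]
# Longest (most specific) path wins.
candidates.sort(key=lambda item: len(item[0]), reverse=True)
print(candidates[0][1]["repository"])  # -> inner (closest parent of cwd)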
def register_proxy_commands(
subparsers: argparse._SubParsersAction,
) -> None:
"""
Register proxy commands (git, docker, docker compose) as
top-level subcommands on the given subparsers.
Register proxy subcommands for git, docker, docker compose, ...
"""
for command, subcommands in PROXY_COMMANDS.items():
for subcommand in subcommands:
@@ -100,7 +170,8 @@ def register_proxy_commands(
help=f"Proxies '{command} {subcommand}' to repository/ies",
description=(
f"Executes '{command} {subcommand}' for the "
"identified repos.\nTo recieve more help execute "
"selected repositories. "
"For more details see the underlying tool's help: "
f"'{command} {subcommand} --help'"
),
formatter_class=argparse.RawTextHelpFormatter,
@@ -129,8 +200,8 @@ def register_proxy_commands(
def maybe_handle_proxy(args: argparse.Namespace, ctx: CLIContext) -> bool:
"""
If the parsed command is a proxy command, execute it and return True.
Otherwise return False to let the main dispatcher continue.
If the top-level command is one of the proxy subcommands
(git / docker / docker compose), handle it here and return True.
"""
all_proxy_subcommands = {
sub for subs in PROXY_COMMANDS.values() for sub in subs
@@ -139,12 +210,17 @@ def maybe_handle_proxy(args: argparse.Namespace, ctx: CLIContext) -> bool:
if args.command not in all_proxy_subcommands:
return False
# Use generic selection semantics for proxies
selected = get_selected_repos(
getattr(args, "all", False),
ctx.all_repositories,
getattr(args, "identifiers", []),
)
# Default semantics: without explicit selection → repo of current folder.
if _proxy_has_explicit_selection(args):
selected = get_selected_repos(args, ctx.all_repositories)
else:
selected = _select_repo_for_current_directory(ctx)
if not selected:
print(
"[ERROR] No repository matches the current directory. "
"Specify identifiers or use --all/--category/--string."
)
sys.exit(1)
for command, subcommands in PROXY_COMMANDS.items():
if args.command not in subcommands:

View File

@@ -1,5 +0,0 @@
from .context import CLIContext
from .parser import create_parser
from .dispatch import dispatch_command
__all__ = ["CLIContext", "create_parser", "dispatch_command"]

View File

@@ -1,140 +0,0 @@
from __future__ import annotations
import os
import sys
from typing import Any, Dict, List
import yaml
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.config_init import config_init
from pkgmgr.interactive_add import interactive_add
from pkgmgr.resolve_repos import resolve_repos
from pkgmgr.save_user_config import save_user_config
from pkgmgr.show_config import show_config
from pkgmgr.run_command import run_command
def _load_user_config(user_config_path: str) -> Dict[str, Any]:
"""
Load the user config file, returning a default structure if it does not exist.
"""
if os.path.exists(user_config_path):
with open(user_config_path, "r") as f:
return yaml.safe_load(f) or {"repositories": []}
return {"repositories": []}
def handle_config(args, ctx: CLIContext) -> None:
"""
Handle the 'config' command and its subcommands.
"""
user_config_path = ctx.user_config_path
# --------------------------------------------------------
# config show
# --------------------------------------------------------
if args.subcommand == "show":
if args.all or (not args.identifiers):
show_config([], user_config_path, full_config=True)
else:
selected = resolve_repos(args.identifiers, ctx.all_repositories)
if selected:
show_config(
selected,
user_config_path,
full_config=False,
)
return
# --------------------------------------------------------
# config add
# --------------------------------------------------------
if args.subcommand == "add":
interactive_add(ctx.config_merged, user_config_path)
return
# --------------------------------------------------------
# config edit
# --------------------------------------------------------
if args.subcommand == "edit":
run_command(f"nano {user_config_path}")
return
# --------------------------------------------------------
# config init
# --------------------------------------------------------
if args.subcommand == "init":
user_config = _load_user_config(user_config_path)
config_init(
user_config,
ctx.config_merged,
ctx.binaries_dir,
user_config_path,
)
return
# --------------------------------------------------------
# config delete
# --------------------------------------------------------
if args.subcommand == "delete":
user_config = _load_user_config(user_config_path)
if args.all or not args.identifiers:
print("You must specify identifiers to delete.")
return
to_delete = resolve_repos(
args.identifiers,
user_config.get("repositories", []),
)
new_repos = [
entry
for entry in user_config.get("repositories", [])
if entry not in to_delete
]
user_config["repositories"] = new_repos
save_user_config(user_config, user_config_path)
print(f"Deleted {len(to_delete)} entries from user config.")
return
# --------------------------------------------------------
# config ignore
# --------------------------------------------------------
if args.subcommand == "ignore":
user_config = _load_user_config(user_config_path)
if args.all or not args.identifiers:
print("You must specify identifiers to modify ignore flag.")
return
to_modify = resolve_repos(
args.identifiers,
user_config.get("repositories", []),
)
for entry in user_config["repositories"]:
key = (
entry.get("provider"),
entry.get("account"),
entry.get("repository"),
)
for mod in to_modify:
mod_key = (
mod.get("provider"),
mod.get("account"),
mod.get("repository"),
)
if key == mod_key:
entry["ignore"] = args.set == "true"
print(
f"Set ignore for {key} to {entry['ignore']}"
)
save_user_config(user_config, user_config_path)
return
# If we end up here, something is wrong with subcommand routing
print(f"Unknown config subcommand: {args.subcommand}")
sys.exit(2)

View File

@@ -1,76 +0,0 @@
from __future__ import annotations
import os
import sys
from typing import Any, Dict, List, Optional
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr import release as rel
Repository = Dict[str, Any]
def handle_release(
args,
ctx: CLIContext,
selected: List[Repository],
) -> None:
"""
Handle the 'release' command.
Creates a release by incrementing the version and updating the changelog
in a single selected repository.
Important:
- Releases are strictly limited to exactly ONE repository.
- Using --all or specifying multiple identifiers for release does
not make sense and is therefore rejected.
- The --preview flag is respected and passed through to the release
implementation so that no changes are made in preview mode.
"""
if not selected:
print("No repositories selected for release.")
sys.exit(1)
if len(selected) > 1:
print(
"[ERROR] Release operations are limited to a single repository.\n"
"Do not use --all or multiple identifiers with 'pkgmgr release'."
)
sys.exit(1)
original_dir = os.getcwd()
repo = selected[0]
repo_dir: Optional[str] = repo.get("directory")
if not repo_dir:
repo_dir = get_repo_dir(ctx.repositories_base_dir, repo)
if not os.path.isdir(repo_dir):
print(
f"[ERROR] Repository directory does not exist locally: {repo_dir}"
)
sys.exit(1)
pyproject_path = os.path.join(repo_dir, "pyproject.toml")
changelog_path = os.path.join(repo_dir, "CHANGELOG.md")
print(
f"Releasing repository '{repo.get('repository')}' in '{repo_dir}'..."
)
os.chdir(repo_dir)
try:
rel.release(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=args.release_type,
message=args.message,
preview=getattr(args, "preview", False),
)
finally:
os.chdir(original_dir)

View File

@@ -1,94 +0,0 @@
from __future__ import annotations
import sys
from typing import List
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.cli_core.proxy import maybe_handle_proxy
from pkgmgr.get_selected_repos import get_selected_repos
from pkgmgr.cli_core.commands import (
handle_repos_command,
handle_tools_command,
handle_release,
handle_version,
handle_config,
handle_make,
handle_changelog,
handle_branch,
)
def dispatch_command(args, ctx: CLIContext) -> None:
"""
Top-level command dispatcher.
Responsible for:
- computing selected repositories (where applicable)
- delegating to the correct command handler module
"""
# 1) Proxy commands (git, docker, docker compose) short-circuit.
if maybe_handle_proxy(args, ctx):
return
# 2) Determine if this command uses repository selection.
commands_with_selection: List[str] = [
"install",
"update",
"deinstall",
"delete",
"status",
"path",
"shell",
"code",
"explore",
"terminal",
"release",
"version",
"make",
"changelog",
# intentionally NOT "branch": it operates on cwd only
]
if args.command in commands_with_selection:
selected = get_selected_repos(
getattr(args, "all", False),
ctx.all_repositories,
getattr(args, "identifiers", []),
)
else:
selected = []
# 3) Delegate based on command.
if args.command in (
"install",
"update",
"deinstall",
"delete",
"status",
"path",
"shell",
"create",
"list",
):
handle_repos_command(args, ctx, selected)
elif args.command in ("code", "explore", "terminal"):
handle_tools_command(args, ctx, selected)
elif args.command == "release":
handle_release(args, ctx, selected)
elif args.command == "version":
handle_version(args, ctx, selected)
elif args.command == "changelog":
handle_changelog(args, ctx, selected)
elif args.command == "config":
handle_config(args, ctx)
elif args.command == "make":
handle_make(args, ctx, selected)
elif args.command == "branch":
# Branch commands currently operate on the current working
# directory only, not on the pkgmgr repository selection.
handle_branch(args, ctx)
else:
print(f"Unknown command: {args.command}")
sys.exit(2)

View File

@@ -1,75 +0,0 @@
import subprocess
import os
from pkgmgr.generate_alias import generate_alias
from pkgmgr.save_user_config import save_user_config
def config_init(user_config, defaults_config, bin_dir, USER_CONFIG_PATH: str):
"""
Scan the base directory (defaults_config["base"]) for repositories.
The folder structure is assumed to be:
{base}/{provider}/{account}/{repository}
For each repository found, automatically determine:
- provider, account, repository from folder names.
- verified: the latest commit (via 'git log -1 --format=%H').
- alias: generated from the repository name using generate_alias().
Repositories already defined in defaults_config["repositories"] or user_config["repositories"] are skipped.
"""
repositories_base_dir = os.path.expanduser(defaults_config["directories"]["repositories"])
if not os.path.isdir(repositories_base_dir):
print(f"Base directory '{repositories_base_dir}' does not exist.")
return
default_keys = {(entry.get("provider"), entry.get("account"), entry.get("repository"))
for entry in defaults_config.get("repositories", [])}
existing_keys = {(entry.get("provider"), entry.get("account"), entry.get("repository"))
for entry in user_config.get("repositories", [])}
existing_aliases = {entry.get("alias") for entry in user_config.get("repositories", []) if entry.get("alias")}
new_entries = []
for provider in os.listdir(repositories_base_dir):
provider_path = os.path.join(repositories_base_dir, provider)
if not os.path.isdir(provider_path):
continue
for account in os.listdir(provider_path):
account_path = os.path.join(provider_path, account)
if not os.path.isdir(account_path):
continue
for repo_name in os.listdir(account_path):
repo_path = os.path.join(account_path, repo_name)
if not os.path.isdir(repo_path):
continue
key = (provider, account, repo_name)
if key in default_keys or key in existing_keys:
continue
try:
result = subprocess.run(
["git", "log", "-1", "--format=%H"],
cwd=repo_path,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
check=True,
)
verified = result.stdout.strip()
except Exception as e:
verified = ""
print(f"Could not determine latest commit for {repo_name} ({provider}/{account}): {e}")
entry = {
"provider": provider,
"account": account,
"repository": repo_name,
"verified": {"commit": verified},
"ignore": True
}
alias = generate_alias({"repository": repo_name, "provider": provider, "account": account}, bin_dir, existing_aliases)
entry["alias"] = alias
existing_aliases.add(alias)
new_entries.append(entry)
print(f"Adding new repo entry: {entry}")
if new_entries:
user_config.setdefault("repositories", []).extend(new_entries)
save_user_config(user_config, USER_CONFIG_PATH)
else:
print("No new repositories found.")

View File

@@ -2,8 +2,8 @@
# -*- coding: utf-8 -*-
import os
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
def create_ink(repo, repositories_base_dir, bin_dir, all_repos,

View File

pkgmgr/core/config/load.py (new file, 305 lines)
View File

@@ -0,0 +1,305 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Load and merge pkgmgr configuration.
Layering rules:
1. Defaults / category files:
- First, all *.yml/*.yaml files (except config.yaml) in the
user directory are loaded:
~/.config/pkgmgr/
- If no matching files exist there, the config directories shipped
with the package / project are used as a fallback:
<pkg_root>/config_defaults
<pkg_root>/config
<project_root>/config_defaults
<project_root>/config
All *.yml/*.yaml files found there are likewise loaded as layers.
- The filename without its extension (stem) is used as the category
name and recorded in repo["category_files"].
2. User config:
- ~/.config/pkgmgr/config.yaml (or the path passed in) is loaded
and merged ON TOP of the defaults via list merge:
- directories: dict deep-merge
- repositories: via _merge_repo_lists (nothing is deleted!)
3. Result:
- A dict containing at least:
config["directories"] (dict)
config["repositories"] (list[dict])
"""
from __future__ import annotations
import os
from pathlib import Path
from typing import Any, Dict, List, Tuple
import yaml
Repo = Dict[str, Any]
# ---------------------------------------------------------------------------
# Helper functions
# ---------------------------------------------------------------------------
def _deep_merge(base: Dict[str, Any], override: Dict[str, Any]) -> Dict[str, Any]:
"""
Recursively merge two dictionaries.
Values from `override` win over values in `base`.
"""
for key, value in override.items():
if (
key in base
and isinstance(base[key], dict)
and isinstance(value, dict)
):
_deep_merge(base[key], value)
else:
base[key] = value
return base
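A worked example of the merge semantics (override wins on scalars, nested dicts merge recursively); the import path follows this new module and is shown for illustration only:

from copy import deepcopy

from pkgmgr.core.config.load import _deep_merge  # module added in this commit

base = {"directories": {"repositories": "~/Repos"}, "verbose": False}
override = {"directories": {"binaries": "~/.local/bin"}, "verbose": True}
merged = _deep_merge(deepcopy(base), override)
print(merged)
# {'directories': {'repositories': '~/Repos', 'binaries': '~/.local/bin'},
#  'verbose': True}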
def _repo_key(repo: Repo) -> Tuple[str, str, str]:
"""
Normalised key for identifying a repository across config files.
"""
return (
str(repo.get("provider", "")),
str(repo.get("account", "")),
str(repo.get("repository", "")),
)
def _merge_repo_lists(
base_list: List[Repo],
new_list: List[Repo],
category_name: str | None = None,
) -> List[Repo]:
"""
Merge two repository lists, matching by (provider, account, repository).
- If a repo from new_list does not exist yet, it is added.
- If it does exist, its fields are overridden via deep merge.
- If category_name is set, it is recorded in
repo["category_files"].
"""
index: Dict[Tuple[str, str, str], Repo] = {
_repo_key(r): r for r in base_list
}
for src in new_list:
key = _repo_key(src)
if key == ("", "", ""):
# Incomplete key -> just append it
dst = dict(src)
if category_name:
dst.setdefault("category_files", [])
if category_name not in dst["category_files"]:
dst["category_files"].append(category_name)
base_list.append(dst)
continue
existing = index.get(key)
if existing is None:
dst = dict(src)
if category_name:
dst.setdefault("category_files", [])
if category_name not in dst["category_files"]:
dst["category_files"].append(category_name)
base_list.append(dst)
index[key] = dst
else:
_deep_merge(existing, src)
if category_name:
existing.setdefault("category_files", [])
if category_name not in existing["category_files"]:
existing["category_files"].append(category_name)
return base_list
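A short example of the list merge: a second layer can enrich an existing entry without removing it, and the layer's filename stem lands in category_files (import shown for illustration):

from pkgmgr.core.config.load import _merge_repo_lists  # module added in this commit

base = [{"provider": "github.com", "account": "kevin", "repository": "pkgmgr"}]
layer = [{"provider": "github.com", "account": "kevin",
          "repository": "pkgmgr", "ignore": True}]
_merge_repo_lists(base, layer, category_name="tools")
print(base[0]["ignore"], base[0]["category_files"])  # True ['tools']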
def _load_yaml_file(path: Path) -> Dict[str, Any]:
"""
Load a single YAML file as dict. Non-dicts yield {}.
"""
if not path.is_file():
return {}
with path.open("r", encoding="utf-8") as f:
data = yaml.safe_load(f) or {}
if not isinstance(data, dict):
return {}
return data
def _load_layer_dir(
config_dir: Path,
skip_filename: str | None = None,
) -> Dict[str, Any]:
"""
Load all *.yml/*.yaml from a directory as layered defaults.
- skip_filename: filename (e.g. "config.yaml") that should be
skipped (e.g. the user config).
Returns:
{
"directories": {...},
"repositories": [...],
}
"""
defaults: Dict[str, Any] = {"directories": {}, "repositories": []}
if not config_dir.is_dir():
return defaults
yaml_files = [
p
for p in config_dir.iterdir()
if p.is_file()
and p.suffix.lower() in (".yml", ".yaml")
and (skip_filename is None or p.name != skip_filename)
]
if not yaml_files:
return defaults
yaml_files.sort(key=lambda p: p.name)
for path in yaml_files:
data = _load_yaml_file(path)
category_name = path.stem  # filename without .yml/.yaml
dirs = data.get("directories")
if isinstance(dirs, dict):
defaults.setdefault("directories", {})
_deep_merge(defaults["directories"], dirs)
repos = data.get("repositories")
if isinstance(repos, list):
defaults.setdefault("repositories", [])
_merge_repo_lists(
defaults["repositories"],
repos,
category_name=category_name,
)
return defaults
def _load_defaults_from_package_or_project() -> Dict[str, Any]:
"""
Fallback: try to load defaults from the installed package OR
from the project root:
<pkg_root>/config_defaults
<pkg_root>/config
<project_root>/config_defaults
<project_root>/config
"""
try:
import pkgmgr # type: ignore
except Exception:
return {"directories": {}, "repositories": []}
pkg_root = Path(pkgmgr.__file__).resolve().parent
project_root = pkg_root.parent
candidates = [
pkg_root / "config_defaults",
pkg_root / "config",
project_root / "config_defaults",
project_root / "config",
]
for cand in candidates:
defaults = _load_layer_dir(cand, skip_filename=None)
if defaults["directories"] or defaults["repositories"]:
return defaults
return {"directories": {}, "repositories": []}
# ---------------------------------------------------------------------------
# Main function
# ---------------------------------------------------------------------------
def load_config(user_config_path: str) -> Dict[str, Any]:
"""
Load and merge configuration for pkgmgr.
Steps:
1. Determine ~/.config/pkgmgr/ (or the directory of user_config_path).
2. Load all *.yml/*.yaml files there (except the user config itself)
as defaults / category layers.
3. If nothing was found there, fall back to package/project defaults.
4. Load the user config file itself (if present).
5. Merge:
- directories: deep merge (defaults <- user)
- repositories: _merge_repo_lists (defaults <- user)
"""
user_config_path_expanded = os.path.expanduser(user_config_path)
user_cfg_path = Path(user_config_path_expanded)
config_dir = user_cfg_path.parent
if not str(config_dir):
# Fallback in case someone passes just "config.yaml"
config_dir = Path(os.path.expanduser("~/.config/pkgmgr"))
config_dir.mkdir(parents=True, exist_ok=True)
user_cfg_name = user_cfg_path.name
# 1+2) Defaults / category layers from the user directory
defaults = _load_layer_dir(config_dir, skip_filename=user_cfg_name)
# 3) If nothing was found there, fall back to package/project defaults
if not defaults["directories"] and not defaults["repositories"]:
defaults = _load_defaults_from_package_or_project()
defaults.setdefault("directories", {})
defaults.setdefault("repositories", [])
# 4) User config
user_cfg: Dict[str, Any] = {}
if user_cfg_path.is_file():
user_cfg = _load_yaml_file(user_cfg_path)
user_cfg.setdefault("directories", {})
user_cfg.setdefault("repositories", [])
# 5) Merge: directories deep-merge, repositories list-merge
merged: Dict[str, Any] = {}
# directories
merged["directories"] = {}
_deep_merge(merged["directories"], defaults["directories"])
_deep_merge(merged["directories"], user_cfg["directories"])
# repositories
merged["repositories"] = []
_merge_repo_lists(merged["repositories"], defaults["repositories"], category_name=None)
_merge_repo_lists(merged["repositories"], user_cfg["repositories"], category_name=None)
# Other top-level keys (if present)
other_keys = (set(defaults.keys()) | set(user_cfg.keys())) - {
"directories",
"repositories",
}
for key in other_keys:
base_val = defaults.get(key)
override_val = user_cfg.get(key)
if isinstance(base_val, dict) and isinstance(override_val, dict):
merged[key] = _deep_merge(dict(base_val), override_val)
elif override_val is not None:
merged[key] = override_val
else:
merged[key] = base_val
return merged
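An end-to-end sketch of the layering: one category file plus a user config in a temporary directory. It assumes the package is importable; paths and values are hypothetical:

import os
import tempfile

import yaml

from pkgmgr.core.config.load import load_config  # module added in this commit

with tempfile.TemporaryDirectory() as cfg_dir:
    # Category layer: the stem "tools" becomes a category_files entry.
    with open(os.path.join(cfg_dir, "tools.yaml"), "w", encoding="utf-8") as f:
        yaml.safe_dump({"repositories": [{"provider": "github.com",
                                          "account": "kevin",
                                          "repository": "pkgmgr"}]}, f)
    # The user config only overrides directories.
    user_cfg = os.path.join(cfg_dir, "config.yaml")
    with open(user_cfg, "w", encoding="utf-8") as f:
        yaml.safe_dump({"directories": {"repositories": "~/Repos"}}, f)
    merged = load_config(user_cfg)
    print(merged["repositories"][0]["category_files"])  # ['tools']
    print(merged["directories"]["repositories"])        # ~/Repos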

View File

@@ -0,0 +1,200 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import annotations
import os
import re
from typing import Any, Dict, List, Sequence
from pkgmgr.core.repository.resolve import resolve_repos
from pkgmgr.core.repository.ignored import filter_ignored
Repository = Dict[str, Any]
def _compile_maybe_regex(pattern: str):
"""
If pattern is of the form /.../, return a compiled regex (case-insensitive).
Otherwise return None.
"""
if len(pattern) >= 2 and pattern.startswith("/") and pattern.endswith("/"):
try:
return re.compile(pattern[1:-1], re.IGNORECASE)
except re.error:
return None
return None
def _match_pattern(value: str, pattern: str) -> bool:
"""
Match a value against a pattern that may be a substring or /regex/.
"""
if not pattern:
return True
regex = _compile_maybe_regex(pattern)
if regex:
return bool(regex.search(value))
return pattern.lower() in value.lower()
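The /regex/ convention in practice; anything not wrapped in slashes is a plain case-insensitive substring (import path follows this new module, shown for illustration):

from pkgmgr.core.repository.selected import _match_pattern  # added in this commit

print(_match_pattern("github.com/kevin/pkgmgr", "PKG"))        # True (substring)
print(_match_pattern("github.com/kevin/pkgmgr", "/^github/"))  # True (regex)
print(_match_pattern("gitlab.com/kevin/pkgmgr", "/^github/"))  # False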
def _match_any(values: Sequence[str], pattern: str) -> bool:
"""
Return True if any of the values matches the pattern.
"""
for v in values:
if _match_pattern(v, pattern):
return True
return False
def _build_identifier_string(repo: Repository) -> str:
"""
Build a combined identifier string for string-based filtering.
"""
provider = str(repo.get("provider", ""))
account = str(repo.get("account", ""))
repository = str(repo.get("repository", ""))
alias = str(repo.get("alias", ""))
description = str(repo.get("description", ""))
directory = str(repo.get("directory", ""))
parts = [
provider,
account,
repository,
alias,
f"{provider}/{account}/{repository}",
description,
directory,
]
return " ".join(p for p in parts if p)
def _apply_filters(
repos: List[Repository],
string_pattern: str,
category_patterns: List[str],
tag_patterns: List[str],
) -> List[Repository]:
if not string_pattern and not category_patterns and not tag_patterns:
return repos
filtered: List[Repository] = []
for repo in repos:
# String filter
if string_pattern:
ident_str = _build_identifier_string(repo)
if not _match_pattern(ident_str, string_pattern):
continue
# Category filter: only real categories, NOT tags
if category_patterns:
cats: List[str] = []
cats.extend(map(str, repo.get("category_files", [])))
if "category" in repo:
cats.append(str(repo["category"]))
if not cats:
continue
ok = True
for pat in category_patterns:
if not _match_any(cats, pat):
ok = False
break
if not ok:
continue
# Tag filter: YAML tags only
if tag_patterns:
tags: List[str] = list(map(str, repo.get("tags", [])))
if not tags:
continue
ok = True
for pat in tag_patterns:
if not _match_any(tags, pat):
ok = False
break
if not ok:
continue
filtered.append(repo)
return filtered
def _maybe_filter_ignored(args, repos: List[Repository]) -> List[Repository]:
"""
Apply ignore filtering unless the caller explicitly opted to include ignored
repositories (via args.include_ignored).
Note: this helper is used only for *implicit* selections (all / filters /
by-directory). For *explicit* identifiers we do NOT filter ignored repos,
so the user can still target them directly if desired.
"""
include_ignored: bool = bool(getattr(args, "include_ignored", False))
if include_ignored:
return repos
return filter_ignored(repos)
def get_selected_repos(args, all_repositories: List[Repository]) -> List[Repository]:
"""
Compute the list of repositories selected by CLI arguments.
Modes:
- If identifiers are given: select via resolve_repos() from all_repositories.
Ignored repositories are *not* filtered here, so explicit identifiers
always win.
- Else if any of --category/--string/--tag is used: start from
all_repositories, apply filters and then drop ignored repos.
- Else if --all is set: select all_repositories and then drop ignored repos.
- Else: try to select the repository of the current working directory
and then drop it if it is ignored.
The ignore filter can be bypassed by setting args.include_ignored = True
(e.g. via a CLI flag --include-ignored).
"""
identifiers: List[str] = getattr(args, "identifiers", []) or []
use_all: bool = bool(getattr(args, "all", False))
category_patterns: List[str] = getattr(args, "category", []) or []
string_pattern: str = getattr(args, "string", "") or ""
tag_patterns: List[str] = getattr(args, "tag", []) or []
has_filters = bool(category_patterns or string_pattern or tag_patterns)
# 1) Explicit identifiers win and bypass ignore filtering
if identifiers:
base = resolve_repos(identifiers, all_repositories)
return _apply_filters(base, string_pattern, category_patterns, tag_patterns)
# 2) Filter-only mode: start from all repositories
if has_filters:
base = _apply_filters(
list(all_repositories),
string_pattern,
category_patterns,
tag_patterns,
)
return _maybe_filter_ignored(args, base)
# 3) --all (no filters): all repos
if use_all:
base = list(all_repositories)
return _maybe_filter_ignored(args, base)
# 4) Fallback: try to select repository of current working directory
cwd = os.path.abspath(os.getcwd())
by_dir = [
repo
for repo in all_repositories
if os.path.abspath(str(repo.get("directory", ""))) == cwd
]
if by_dir:
return _maybe_filter_ignored(args, by_dir)
# No specific match -> empty list
return []
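A usage sketch of the new selection semantics with a fake args namespace; filter_ignored is assumed to drop entries with ignore: true, as described in the commit message:

from types import SimpleNamespace

from pkgmgr.core.repository.selected import get_selected_repos

repos = [
    {"provider": "github.com", "account": "kevin", "repository": "pkgmgr"},
    {"provider": "github.com", "account": "kevin",
     "repository": "ignored-repo", "ignore": True},
]
args = SimpleNamespace(identifiers=[], all=True, category=[], string="", tag=[])
print([r["repository"] for r in get_selected_repos(args, repos)])
# ['pkgmgr'] -- ignored-repo is dropped in --all mode
args.include_ignored = True
print(len(get_selected_repos(args, repos)))  # 2 -- explicit override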

View File

@@ -1,29 +0,0 @@
import os
import sys
from .resolve_repos import resolve_repos
from .filter_ignored import filter_ignored
from .get_repo_dir import get_repo_dir
def get_selected_repos(show_all: bool, all_repos_list, identifiers=None):
if show_all:
selected = all_repos_list
else:
selected = resolve_repos(identifiers, all_repos_list)
# If no repositories were found using the provided identifiers,
# try to automatically select based on the current directory:
if not selected:
current_dir = os.getcwd()
directory_name = os.path.basename(current_dir)
# Pack the directory name in a list since resolve_repos expects a list.
auto_selected = resolve_repos([directory_name], all_repos_list)
if auto_selected:
# Check if the path of the first auto-selected repository matches the current directory.
if os.path.abspath(auto_selected[0].get("directory")) == os.path.abspath(current_dir):
print(f"Repository {auto_selected[0]['repository']} has been auto-selected by path.")
selected = auto_selected
filtered = filter_ignored(selected)
if not filtered:
print("Error: No repositories had been selected.")
sys.exit(4)
return filtered

View File

@@ -1,19 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Installer package for pkgmgr.
This exposes all installer classes so users can import them directly from
pkgmgr.installers.
"""
from pkgmgr.installers.base import BaseInstaller # noqa: F401
from pkgmgr.installers.nix_flake import NixFlakeInstaller # noqa: F401
from pkgmgr.installers.python import PythonInstaller # noqa: F401
from pkgmgr.installers.makefile import MakefileInstaller # noqa: F401
# OS-specific installers
from pkgmgr.installers.os_packages.arch_pkgbuild import ArchPkgbuildInstaller # noqa: F401
from pkgmgr.installers.os_packages.debian_control import DebianControlInstaller # noqa: F401
from pkgmgr.installers.os_packages.rpm_spec import RpmSpecInstaller # noqa: F401

View File

@@ -1,108 +0,0 @@
import os
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.get_repo_dir import get_repo_dir
def list_repositories(all_repos, repositories_base_dir, bin_dir, search_filter="", status_filter=""):
"""
Lists all repositories with their attributes and status information.
The repositories are sorted in ascending order by their identifier.
Parameters:
all_repos (list): List of repository configurations.
repositories_base_dir (str): The base directory where repositories are located.
bin_dir (str): The directory where executable wrappers are stored.
search_filter (str): Filter for repository attributes (case insensitive).
status_filter (str): Filter for computed status info (case insensitive).
For each repository, the identifier is printed in bold, the description (if available)
in italic, then all other attributes and computed status are printed.
If the repository is installed, a hint is displayed under the attributes.
Repositories are filtered out if either the search_filter is not found in any attribute or
if the status_filter is not found in the computed status string.
"""
search_filter = search_filter.lower() if search_filter else ""
status_filter = status_filter.lower() if status_filter else ""
# Define status colors using colors not used for other attributes:
# Avoid red (for ignore), blue (for homepage) and yellow (for verified).
status_colors = {
"Installed": "\033[1;32m", # Green
"Not Installed": "\033[1;35m", # Magenta
"Cloned": "\033[1;36m", # Cyan
"Clonable": "\033[1;37m", # White
"Ignored": "\033[38;5;208m", # Orange (extended)
"Active": "\033[38;5;129m", # Light Purple (extended)
"Installable": "\033[38;5;82m" # Light Green (extended)
}
# Sort all repositories by their identifier in ascending order.
sorted_repos = sorted(all_repos, key=lambda repo: get_repo_identifier(repo, all_repos))
for repo in sorted_repos:
# Combine all attribute values into one string for filtering.
repo_text = " ".join(str(v) for v in repo.values()).lower()
if search_filter and search_filter not in repo_text:
continue
# Compute status information for the repository.
identifier = get_repo_identifier(repo, all_repos)
executable_path = os.path.join(bin_dir, identifier)
repo_dir = get_repo_dir(repositories_base_dir, repo)
status_list = []
# Check if the executable exists (Installed).
if os.path.exists(executable_path):
status_list.append("Installed")
else:
status_list.append("Not Installed")
# Check if the repository directory exists (Cloned).
if os.path.exists(repo_dir):
status_list.append("Cloned")
else:
status_list.append("Clonable")
# Mark ignored repositories.
if repo.get("ignore", False):
status_list.append("Ignored")
else:
status_list.append("Active")
# Define installable as cloned but not installed.
if os.path.exists(repo_dir) and not os.path.exists(executable_path):
status_list.append("Installable")
# Build a colored status string.
colored_statuses = [f"{status_colors.get(s, '')}{s}\033[0m" for s in status_list]
status_str = ", ".join(colored_statuses)
# If a status_filter is provided, only display repos whose status contains the filter.
if status_filter and status_filter not in status_str.lower():
continue
# Display repository details:
# Print the identifier in bold.
print(f"\033[1m{identifier}\033[0m")
# Print the description in italic if it exists.
description = repo.get("description")
if description:
print(f"\n\033[3m{description}\033[0m")
print("\nAttributes:")
# Loop through all attributes.
for key, value in repo.items():
formatted_value = str(value)
# Special formatting for the "verified" attribute (yellow).
if key == "verified" and value:
formatted_value = f"\033[1;33m{value}\033[0m"
# Special formatting for the "ignore" flag (red if True).
if key == "ignore" and value:
formatted_value = f"\033[1;31m{value}\033[0m"
if key == "description":
continue
# Highlight homepage in blue.
if key.lower() == "homepage" and value:
formatted_value = f"\033[1;34m{value}\033[0m"
print(f" {key}: {formatted_value}")
# Always display the computed status.
print(f" Status: {status_str}")
# If the repository is installed, display a hint for more info.
if os.path.exists(executable_path):
print(f"\nMore information and help: \033[1;4mpkgmgr {identifier} --help\033[0m\n")
print("-" * 40)

View File

@@ -1,30 +0,0 @@
import sys
import yaml
import os
from .get_repo_dir import get_repo_dir
DEFAULT_CONFIG_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), "../","config", "defaults.yaml")
def load_config(user_config_path):
"""Load configuration from defaults and merge in user config if present."""
if not os.path.exists(DEFAULT_CONFIG_PATH):
print(f"Default configuration file '{DEFAULT_CONFIG_PATH}' not found.")
sys.exit(5)
with open(DEFAULT_CONFIG_PATH, 'r') as f:
config = yaml.safe_load(f)
if "directories" not in config or "repositories" not in config:
print("Default config file must contain 'directories' and 'repositories' keys.")
sys.exit(6)
if os.path.exists(user_config_path):
with open(user_config_path, 'r') as f:
user_config = yaml.safe_load(f)
if user_config:
if "directories" in user_config:
config["directories"] = user_config["directories"]
if "repositories" in user_config:
config["repositories"].extend(user_config["repositories"])
for repository in config["repositories"]:
# You can overwrite the directory path in the config
if "directory" not in repository:
directory = get_repo_dir(config["directories"]["repositories"], repository)
repository["directory"] = os.path.expanduser(directory)
return config

View File

@@ -1,939 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
pkgmgr/release.py
Release helper for pkgmgr.
Responsibilities (Milestone 7):
- Determine the next semantic version based on existing Git tags.
- Update pyproject.toml with the new version.
- Update additional packaging files (flake.nix, PKGBUILD,
debian/changelog, RPM spec) where present.
- Prepend a basic entry to CHANGELOG.md.
- Commit, tag, and push the release on the current branch.
Additional behaviour:
- If `preview=True` (from --preview), no files are written and no
Git commands are executed. Instead, a detailed summary of the
planned changes and commands is printed.
- If `preview=False` and not forced, the release is executed in two
phases:
1) Preview-only run (dry-run).
2) Interactive confirmation, then real release if confirmed.
This confirmation can be skipped with the `-f/--force` flag.
- If `-c/--close` is used and the current branch is not main/master,
the branch will be closed via branch_commands.close_branch() after
a successful release.
"""
from __future__ import annotations
import argparse
import os
import re
import subprocess
import sys
import tempfile
from datetime import date, datetime
from typing import Optional, Tuple
from pkgmgr.git_utils import get_tags, get_current_branch, GitError
from pkgmgr.branch_commands import close_branch
from pkgmgr.versioning import (
SemVer,
find_latest_version,
bump_major,
bump_minor,
bump_patch,
)
# ---------------------------------------------------------------------------
# Helpers for Git + version discovery
# ---------------------------------------------------------------------------
def _determine_current_version() -> SemVer:
"""
Determine the current semantic version from Git tags.
Behaviour:
- If there are no tags or no SemVer-compatible tags, return 0.0.0.
- Otherwise, use the latest SemVer tag as current version.
"""
tags = get_tags()
if not tags:
return SemVer(0, 0, 0)
latest = find_latest_version(tags)
if latest is None:
return SemVer(0, 0, 0)
_tag, ver = latest
return ver
def _bump_semver(current: SemVer, release_type: str) -> SemVer:
"""
Bump the given SemVer according to the release type.
release_type must be one of: "major", "minor", "patch".
"""
if release_type == "major":
return bump_major(current)
if release_type == "minor":
return bump_minor(current)
if release_type == "patch":
return bump_patch(current)
raise ValueError(f"Unknown release type: {release_type!r}")
# ---------------------------------------------------------------------------
# Low-level Git command helper
# ---------------------------------------------------------------------------
def _run_git_command(cmd: str) -> None:
"""
Run a Git (or shell) command with basic error reporting.
The command is executed via the shell, primarily for readability
when printed (as in 'git commit -am "msg"').
"""
print(f"[GIT] {cmd}")
try:
subprocess.run(cmd, shell=True, check=True)
except subprocess.CalledProcessError as exc:
print(f"[ERROR] Git command failed: {cmd}")
print(f" Exit code: {exc.returncode}")
if exc.stdout:
print("--- stdout ---")
print(exc.stdout)
if exc.stderr:
print("--- stderr ---")
print(exc.stderr)
raise GitError(f"Git command failed: {cmd}") from exc
# ---------------------------------------------------------------------------
# Editor helper for interactive changelog messages
# ---------------------------------------------------------------------------
def _open_editor_for_changelog(initial_message: Optional[str] = None) -> str:
"""
Open $EDITOR (fallback 'nano') so the user can enter a changelog message.
The temporary file is pre-filled with commented instructions and an
optional initial_message. Lines starting with '#' are ignored when the
message is read back.
Returns the final message (may be empty string if user leaves it blank).
"""
editor = os.environ.get("EDITOR", "nano")
with tempfile.NamedTemporaryFile(
mode="w+",
delete=False,
encoding="utf-8",
) as tmp:
tmp_path = tmp.name
# Prefill with instructions as comments
tmp.write(
"# Write the changelog entry for this release.\n"
"# Lines starting with '#' will be ignored.\n"
"# Empty result will fall back to a generic message.\n\n"
)
if initial_message:
tmp.write(initial_message.strip() + "\n")
tmp.flush()
# Open editor
subprocess.call([editor, tmp_path])
# Read back content
try:
with open(tmp_path, "r", encoding="utf-8") as f:
content = f.read()
finally:
try:
os.remove(tmp_path)
except OSError:
pass
# Filter out commented lines and return joined text
lines = [
line for line in content.splitlines()
if not line.strip().startswith("#")
]
return "\n".join(lines).strip()
# ---------------------------------------------------------------------------
# File update helpers (pyproject + extra packaging + changelog)
# ---------------------------------------------------------------------------
def update_pyproject_version(
pyproject_path: str,
new_version: str,
preview: bool = False,
) -> None:
"""
Update the version in pyproject.toml with the new version.
The function looks for a line matching:
version = "X.Y.Z"
and replaces the version part with the given new_version string.
It does not try to parse the full TOML structure here. This keeps the
implementation small and robust as long as the version line follows
the standard pattern.
Behaviour:
- In normal mode: write the updated content back to the file.
- In preview mode: do NOT write, only report what would change.
"""
try:
with open(pyproject_path, "r", encoding="utf-8") as f:
content = f.read()
except FileNotFoundError:
print(f"[ERROR] pyproject.toml not found at: {pyproject_path}")
sys.exit(1)
pattern = r'^(version\s*=\s*")([^"]+)(")'
new_content, count = re.subn(
pattern,
lambda m: f'{m.group(1)}{new_version}{m.group(3)}',
content,
flags=re.MULTILINE,
)
if count == 0:
print("[ERROR] Could not find version line in pyproject.toml")
sys.exit(1)
if preview:
print(f"[PREVIEW] Would update pyproject.toml version to {new_version}")
return
with open(pyproject_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(f"Updated pyproject.toml version to {new_version}")
def update_flake_version(
flake_path: str,
new_version: str,
preview: bool = False,
) -> None:
"""
Update the version in flake.nix, if present.
Looks for a line like:
version = "1.2.3";
and replaces the string inside the quotes. If the file does not
exist or no version line is found, this is treated as a non-fatal
condition and only a log message is printed.
"""
if not os.path.exists(flake_path):
print("[INFO] flake.nix not found, skipping.")
return
try:
with open(flake_path, "r", encoding="utf-8") as f:
content = f.read()
except Exception as exc:
print(f"[WARN] Could not read flake.nix: {exc}")
return
pattern = r'(version\s*=\s*")([^"]+)(")'
new_content, count = re.subn(
pattern,
lambda m: f'{m.group(1)}{new_version}{m.group(3)}',
content,
)
if count == 0:
print("[WARN] No version assignment found in flake.nix, skipping.")
return
if preview:
print(f"[PREVIEW] Would update flake.nix version to {new_version}")
return
with open(flake_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(f"Updated flake.nix version to {new_version}")
def update_pkgbuild_version(
pkgbuild_path: str,
new_version: str,
preview: bool = False,
) -> None:
"""
Update the version in PKGBUILD, if present.
Expects:
pkgver=1.2.3
pkgrel=1
Behaviour:
- Set pkgver to the new_version (e.g. 1.2.3).
- Reset pkgrel to 1.
If the file does not exist, this is non-fatal and only a log
message is printed.
"""
if not os.path.exists(pkgbuild_path):
print("[INFO] PKGBUILD not found, skipping.")
return
try:
with open(pkgbuild_path, "r", encoding="utf-8") as f:
content = f.read()
except Exception as exc:
print(f"[WARN] Could not read PKGBUILD: {exc}")
return
# Update pkgver
ver_pattern = r"^(pkgver\s*=\s*)(.+)$"
new_content, ver_count = re.subn(
ver_pattern,
lambda m: f"{m.group(1)}{new_version}",
content,
flags=re.MULTILINE,
)
if ver_count == 0:
print("[WARN] No pkgver line found in PKGBUILD.")
new_content = content # revert to original if we didn't change anything
# Reset pkgrel to 1
rel_pattern = r"^(pkgrel\s*=\s*)(.+)$"
new_content, rel_count = re.subn(
rel_pattern,
lambda m: f"{m.group(1)}1",
new_content,
flags=re.MULTILINE,
)
if rel_count == 0:
print("[WARN] No pkgrel line found in PKGBUILD.")
if preview:
print(f"[PREVIEW] Would update PKGBUILD to pkgver={new_version}, pkgrel=1")
return
with open(pkgbuild_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(f"Updated PKGBUILD to pkgver={new_version}, pkgrel=1")
def update_spec_version(
spec_path: str,
new_version: str,
preview: bool = False,
) -> None:
"""
Update the version in an RPM spec file, if present.
Assumes a file like 'package-manager.spec' with lines:
Version: 1.2.3
Release: 1%{?dist}
Behaviour:
- Set 'Version:' to new_version.
- Reset 'Release:' to '1' while preserving any macro suffix,
e.g. '1%{?dist}'.
If the file does not exist, this is non-fatal and only a log
message is printed.
"""
if not os.path.exists(spec_path):
print("[INFO] RPM spec file not found, skipping.")
return
try:
with open(spec_path, "r", encoding="utf-8") as f:
content = f.read()
except Exception as exc:
print(f"[WARN] Could not read spec file: {exc}")
return
# Update Version:
ver_pattern = r"^(Version:\s*)(.+)$"
new_content, ver_count = re.subn(
ver_pattern,
lambda m: f"{m.group(1)}{new_version}",
content,
flags=re.MULTILINE,
)
if ver_count == 0:
print("[WARN] No 'Version:' line found in spec file.")
# Reset Release:
rel_pattern = r"^(Release:\s*)(.+)$"
def _release_repl(m: re.Match[str]) -> str: # type: ignore[name-defined]
rest = m.group(2).strip()
# Reset numeric prefix to "1" and keep any suffix (e.g. % macros).
match = re.match(r"^(\d+)(.*)$", rest)
if match:
suffix = match.group(2)
else:
suffix = ""
return f"{m.group(1)}1{suffix}"
new_content, rel_count = re.subn(
rel_pattern,
_release_repl,
new_content,
flags=re.MULTILINE,
)
if rel_count == 0:
print("[WARN] No 'Release:' line found in spec file.")
if preview:
print(
f"[PREVIEW] Would update spec file "
f"{os.path.basename(spec_path)} to Version: {new_version}, Release: 1..."
)
return
with open(spec_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(
f"Updated spec file {os.path.basename(spec_path)} "
f"to Version: {new_version}, Release: 1..."
)
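The Release reset keeps any macro suffix while forcing the numeric prefix back to 1; here is the same replacement logic run on a standalone line:

import re

line = "Release:        3%{?dist}"

def _repl(m: re.Match) -> str:
    rest = m.group(2).strip()
    # Keep everything after the leading digits (e.g. the %{?dist} macro).
    match = re.match(r"^(\d+)(.*)$", rest)
    suffix = match.group(2) if match else ""
    return f"{m.group(1)}1{suffix}"

print(re.sub(r"^(Release:\s*)(.+)$", _repl, line, flags=re.MULTILINE))
# Release:        1%{?dist}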
def update_changelog(
changelog_path: str,
new_version: str,
message: Optional[str] = None,
preview: bool = False,
) -> str:
"""
Prepend a new release section to CHANGELOG.md with the new version,
current date, and a message.
Behaviour:
- If message is None and preview is False:
→ open $EDITOR (fallback 'nano') to let the user enter a message.
- If message is None and preview is True:
→ use a generic automated message.
- The resulting changelog entry is printed to stdout.
- Returns the final message text used.
"""
today = date.today().isoformat()
# Resolve message
if message is None:
if preview:
# Do not open editor in preview mode; keep it non-interactive.
message = "Automated release."
else:
print(
"\n[INFO] No release message provided, opening editor for "
"changelog entry...\n"
)
editor_message = _open_editor_for_changelog()
if not editor_message:
message = "Automated release."
else:
message = editor_message
header = f"## [{new_version}] - {today}\n"
header += f"\n* {message}\n\n"
if os.path.exists(changelog_path):
try:
with open(changelog_path, "r", encoding="utf-8") as f:
changelog = f.read()
except Exception as exc:
print(f"[WARN] Could not read existing CHANGELOG.md: {exc}")
changelog = ""
else:
changelog = ""
new_changelog = header + "\n" + changelog if changelog else header
# Show the entry that will be written
print("\n================ CHANGELOG ENTRY ================")
print(header.rstrip())
print("=================================================\n")
if preview:
print(f"[PREVIEW] Would prepend new entry for {new_version} to CHANGELOG.md")
return message
with open(changelog_path, "w", encoding="utf-8") as f:
f.write(new_changelog)
print(f"Updated CHANGELOG.md with version {new_version}")
return message
# ---------------------------------------------------------------------------
# Debian changelog helpers (with Git config fallback for maintainer)
# ---------------------------------------------------------------------------
def _get_git_config_value(key: str) -> Optional[str]:
"""
Try to read a value from `git config --get <key>`.
Returns the stripped value or None if not set / on error.
"""
try:
result = subprocess.run(
["git", "config", "--get", key],
capture_output=True,
text=True,
check=False,
)
except Exception:
return None
value = result.stdout.strip()
return value or None
def _get_debian_author() -> Tuple[str, str]:
"""
Determine the maintainer name/email for debian/changelog entries.
Priority:
1. DEBFULLNAME / DEBEMAIL
2. GIT_AUTHOR_NAME / GIT_AUTHOR_EMAIL
3. git config user.name / user.email
4. Fallback: 'Unknown Maintainer' / 'unknown@example.com'
"""
name = os.environ.get("DEBFULLNAME")
email = os.environ.get("DEBEMAIL")
if not name:
name = os.environ.get("GIT_AUTHOR_NAME")
if not email:
email = os.environ.get("GIT_AUTHOR_EMAIL")
if not name:
name = _get_git_config_value("user.name")
if not email:
email = _get_git_config_value("user.email")
if not name:
name = "Unknown Maintainer"
if not email:
email = "unknown@example.com"
return name, email
def update_debian_changelog(
debian_changelog_path: str,
package_name: str,
new_version: str,
message: Optional[str] = None,
preview: bool = False,
) -> None:
"""
Prepend a new entry to debian/changelog, if it exists.
The first line typically looks like:
package-name (1.2.3-1) unstable; urgency=medium
We generate a new stanza at the top with Debian-style version
'X.Y.Z-1'. If the file does not exist, this function does nothing.
"""
if not os.path.exists(debian_changelog_path):
print("[INFO] debian/changelog not found, skipping.")
return
debian_version = f"{new_version}-1"
now = datetime.now().astimezone()
# Debian-like date string, e.g. "Mon, 08 Dec 2025 12:34:56 +0100"
date_str = now.strftime("%a, %d %b %Y %H:%M:%S %z")
author_name, author_email = _get_debian_author()
first_line = f"{package_name} ({debian_version}) unstable; urgency=medium"
body_line = (
message.strip() if message else f"Automated release {new_version}."
)
stanza = (
f"{first_line}\n\n"
f" * {body_line}\n\n"
f" -- {author_name} <{author_email}> {date_str}\n\n"
)
if preview:
print(
"[PREVIEW] Would prepend the following stanza to debian/changelog:\n"
f"{stanza}"
)
return
try:
with open(debian_changelog_path, "r", encoding="utf-8") as f:
existing = f.read()
except Exception as exc:
print(f"[WARN] Could not read debian/changelog: {exc}")
existing = ""
new_content = stanza + existing
with open(debian_changelog_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(f"Updated debian/changelog with version {debian_version}")
# ---------------------------------------------------------------------------
# Internal implementation (single-phase, preview or real)
# ---------------------------------------------------------------------------
def _release_impl(
pyproject_path: str = "pyproject.toml",
changelog_path: str = "CHANGELOG.md",
release_type: str = "patch",
message: Optional[str] = None,
preview: bool = False,
close: bool = False,
) -> None:
"""
Internal implementation that performs a single-phase release.
If `preview` is True:
- No files are written.
- No git commands are executed.
- Planned actions are printed.
If `preview` is False:
- Files are updated.
- Git commit, tag, and push are executed.
- If `close` is True and the current branch is not main/master,
the branch will be closed after a successful release.
"""
# 1) Determine the current version from Git tags.
current_ver = _determine_current_version()
# 2) Compute the next version.
new_ver = _bump_semver(current_ver, release_type)
new_ver_str = str(new_ver)
new_tag = new_ver.to_tag(with_prefix=True)
mode = "PREVIEW" if preview else "REAL"
print(f"Release mode: {mode}")
print(f"Current version: {current_ver}")
print(f"New version: {new_ver_str} ({release_type})")
# Determine repository root based on pyproject location
repo_root = os.path.dirname(os.path.abspath(pyproject_path))
# 3) Update files.
update_pyproject_version(pyproject_path, new_ver_str, preview=preview)
# Let update_changelog resolve or edit the message; reuse it for debian.
message = update_changelog(
changelog_path,
new_ver_str,
message=message,
preview=preview,
)
# Additional packaging files (non-fatal if missing)
flake_path = os.path.join(repo_root, "flake.nix")
update_flake_version(flake_path, new_ver_str, preview=preview)
pkgbuild_path = os.path.join(repo_root, "PKGBUILD")
update_pkgbuild_version(pkgbuild_path, new_ver_str, preview=preview)
spec_path = os.path.join(repo_root, "package-manager.spec")
update_spec_version(spec_path, new_ver_str, preview=preview)
debian_changelog_path = os.path.join(repo_root, "debian", "changelog")
# Use repo directory name as a simple default for package name
package_name = os.path.basename(repo_root) or "package-manager"
update_debian_changelog(
debian_changelog_path,
package_name=package_name,
new_version=new_ver_str,
message=message,
preview=preview,
)
# 4) Git operations: stage, commit, tag, push.
commit_msg = f"Release version {new_ver_str}"
tag_msg = message or commit_msg
try:
branch = get_current_branch() or "main"
except GitError:
branch = "main"
print(f"Releasing on branch: {branch}")
# Stage all relevant packaging files so they are included in the commit
files_to_add = [
pyproject_path,
changelog_path,
flake_path,
pkgbuild_path,
spec_path,
debian_changelog_path,
]
existing_files = [p for p in files_to_add if p and os.path.exists(p)]
if preview:
for path in existing_files:
print(f"[PREVIEW] Would run: git add {path}")
print(f'[PREVIEW] Would run: git commit -am "{commit_msg}"')
print(f'[PREVIEW] Would run: git tag -a {new_tag} -m "{tag_msg}"')
print(f"[PREVIEW] Would run: git push origin {branch}")
print("[PREVIEW] Would run: git push origin --tags")
if close and branch not in ("main", "master"):
print(
f"[PREVIEW] Would also close branch {branch} after the release "
"(--close specified and branch is not main/master)."
)
elif close:
print(
f"[PREVIEW] --close specified but current branch is {branch}; "
"no branch would be closed."
)
print("Preview completed. No changes were made.")
return
for path in existing_files:
_run_git_command(f"git add {path}")
_run_git_command(f'git commit -am "{commit_msg}"')
_run_git_command(f'git tag -a {new_tag} -m "{tag_msg}"')
_run_git_command(f"git push origin {branch}")
_run_git_command("git push origin --tags")
print(f"Release {new_ver_str} completed.")
# Optional: close the current branch after a successful release.
if close:
if branch in ("main", "master"):
print(
f"[INFO] --close specified but current branch is {branch}; "
"nothing to close."
)
return
print(
f"[INFO] Closing branch {branch} after successful release "
"(--close enabled and branch is not main/master)..."
)
try:
close_branch(name=branch, base_branch="main", cwd=".")
except Exception as exc: # pragma: no cover - defensive
print(f"[WARN] Failed to close branch {branch} automatically: {exc}")
# ---------------------------------------------------------------------------
# Public release entry point (with preview-first + confirmation logic)
# ---------------------------------------------------------------------------
def release(
pyproject_path: str = "pyproject.toml",
changelog_path: str = "CHANGELOG.md",
release_type: str = "patch",
message: Optional[str] = None,
preview: bool = False,
force: bool = False,
close: bool = False,
) -> None:
"""
High-level release entry point.
Modes:
- preview=True:
* Single-phase PREVIEW only.
* No files are changed, no git commands are executed.
* `force` is ignored in this mode.
- preview=False, force=True:
* Single-phase REAL release, no interactive preview.
* Files are changed and git commands are executed immediately.
- preview=False, force=False:
* Two-phase flow (intended default for interactive CLI use):
1) PREVIEW: dry-run, printing all planned actions.
2) Ask the user for confirmation:
"Proceed with the actual release? [y/N]: "
If confirmed, perform the REAL release.
Otherwise, abort without changes.
* In non-interactive environments (stdin not a TTY), the
confirmation step is skipped automatically and a single
REAL phase is executed, to avoid blocking on input().
The `close` flag controls whether the current branch should be
closed after a successful REAL release (only if it is not main/master).
"""
# Explicit preview mode: just do a single PREVIEW phase and exit.
if preview:
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=True,
close=close,
)
return
# Non-preview, but forced: run REAL release directly.
if force:
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=False,
close=close,
)
return
# Non-interactive environment? Skip confirmation to avoid blocking.
if not sys.stdin.isatty():
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=False,
close=close,
)
return
# Interactive two-phase flow:
print("[INFO] Running preview before actual release...\n")
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=True,
close=close,
)
# Ask for confirmation
try:
answer = input("Proceed with the actual release? [y/N]: ").strip().lower()
except (EOFError, KeyboardInterrupt):
print("\n[INFO] Release aborted (no confirmation).")
return
if answer not in ("y", "yes"):
print("Release aborted by user. No changes were made.")
return
print("\n[INFO] Running REAL release...\n")
_release_impl(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=release_type,
message=message,
preview=False,
close=close,
)
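# Usage sketch (import path assumed from this repository's layout, where the
# release helpers live in the pkgmgr.actions.release package):
#
#   from pkgmgr.actions.release import release
#
#   # Dry run: print planned file edits and git commands, change nothing.
#   release(release_type="minor", preview=True)
#
#   # Non-interactive real release, skipping the preview/confirmation step.
#   release(release_type="patch", message="Fix crash on startup", force=True)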
# ---------------------------------------------------------------------------
# CLI entry point for standalone use
# ---------------------------------------------------------------------------
def _parse_args(argv: Optional[list[str]] = None) -> argparse.Namespace:
parser = argparse.ArgumentParser(description="pkgmgr release helper")
parser.add_argument(
"release_type",
choices=["major", "minor", "patch"],
help="Type of release (major/minor/patch).",
)
parser.add_argument(
"-m",
"--message",
dest="message",
default=None,
help="Release message to use for changelog and tag.",
)
parser.add_argument(
"--pyproject",
dest="pyproject",
default="pyproject.toml",
help="Path to pyproject.toml (default: pyproject.toml)",
)
parser.add_argument(
"--changelog",
dest="changelog",
default="CHANGELOG.md",
help="Path to CHANGELOG.md (default: CHANGELOG.md)",
)
parser.add_argument(
"--preview",
action="store_true",
help=(
"Preview release changes without modifying files or running git. "
"This mode never executes the real release."
),
)
parser.add_argument(
"-f",
"--force",
dest="force",
action="store_true",
help=(
"Skip the interactive preview+confirmation step and run the "
"release directly."
),
)
parser.add_argument(
"-c",
"--close",
dest="close",
action="store_true",
help=(
"Close the current branch after a successful release, "
"if it is not main/master."
),
)
return parser.parse_args(argv)
if __name__ == "__main__":
args = _parse_args()
release(
pyproject_path=args.pyproject,
changelog_path=args.changelog,
release_type=args.release_type,
message=args.message,
preview=args.preview,
force=args.force,
close=args.close,
)
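# CLI sketch (invocation path is an assumption; run the module file however it
# is exposed in your checkout):
#
#   python release.py patch -m "Fix crash on startup" --preview
#   python release.py minor --force --close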


@@ -7,7 +7,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "package-manager"
version = "0.4.0"
version = "0.7.3"
description = "Kevin's package-manager tool (pkgmgr)"
readme = "README.md"
requires-python = ">=3.11"


@@ -0,0 +1,35 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/resolve-base-image.sh"
echo "============================================================"
echo ">>> Building ONLY missing container images"
echo "============================================================"
for distro in $DISTROS; do
IMAGE="package-manager-test-$distro"
BASE_IMAGE="$(resolve_base_image "$distro")"
if docker image inspect "$IMAGE" >/dev/null 2>&1; then
echo "[build-missing] Image already exists: $IMAGE (skipping)"
continue
fi
echo
echo "------------------------------------------------------------"
echo "[build-missing] Building missing image: $IMAGE"
echo "BASE_IMAGE = $BASE_IMAGE"
echo "------------------------------------------------------------"
docker build \
--build-arg BASE_IMAGE="$BASE_IMAGE" \
-t "$IMAGE" \
.
done
echo
echo "============================================================"
echo ">>> build-missing: Done"
echo "============================================================"


@@ -0,0 +1,17 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/resolve-base-image.sh"
for distro in $DISTROS; do
base_image="$(resolve_base_image "$distro")"
echo ">>> Building test image for distro '$distro' with NO CACHE (BASE_IMAGE=$base_image)..."
docker build \
--no-cache \
--build-arg BASE_IMAGE="$base_image" \
-t "package-manager-test-$distro" \
.
done

scripts/build/build-image.sh Executable file

@@ -0,0 +1,16 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/resolve-base-image.sh"
for distro in $DISTROS; do
base_image="$(resolve_base_image "$distro")"
echo ">>> Building test image for distro '$distro' (BASE_IMAGE=$base_image)..."
docker build \
--build-arg BASE_IMAGE="$base_image" \
-t "package-manager-test-$distro" \
.
done


@@ -0,0 +1,18 @@
#!/usr/bin/env bash
set -euo pipefail
resolve_base_image() {
local distro="$1"
case "$distro" in
arch) echo "$BASE_IMAGE_ARCH" ;;
debian) echo "$BASE_IMAGE_DEBIAN" ;;
ubuntu) echo "$BASE_IMAGE_UBUNTU" ;;
fedora) echo "$BASE_IMAGE_FEDORA" ;;
centos) echo "$BASE_IMAGE_CENTOS" ;;
*)
echo "ERROR: Unknown distro '$distro'" >&2
exit 1
;;
esac
}
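# Usage sketch (concrete values are illustrative): the build scripts above
# expect DISTROS plus one BASE_IMAGE_* variable per distro to be set before
# this file is sourced, e.g.:
#
#   DISTROS="arch debian" \
#   BASE_IMAGE_ARCH=archlinux:latest \
#   BASE_IMAGE_DEBIAN=debian:bookworm \
#   scripts/build/build-image.sh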


@@ -1,22 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[entry] Using /src as working tree for package-manager..."
cd /src
# Optional: remove the old package first
echo "[entry] Removing existing 'package-manager' Arch package (if installed)..."
pacman -Rns --noconfirm package-manager || true
# Set the build owner correctly (in case /src comes from the host)
echo "[entry] Fixing ownership of /src for user 'builder'..."
chown -R builder:builder /src
echo "[entry] Rebuilding Arch package from /src as user 'builder'..."
su builder -c "cd /src && makepkg -s --noconfirm --clean"
echo "[entry] Installing freshly built package-manager-*.pkg.tar.*..."
pacman -U --noconfirm /src/package-manager-*.pkg.tar.*
echo "[entry] Handing off to pkgmgr with args: $*"
exec pkgmgr "$@"

scripts/docker/entry.sh Executable file

@@ -0,0 +1,61 @@
#!/usr/bin/env bash
set -euo pipefail
# ---------------------------------------------------------------------------
# Ensure Nix has access to a valid CA bundle (TLS trust store)
# ---------------------------------------------------------------------------
if [[ -z "${NIX_SSL_CERT_FILE:-}" ]]; then
if [[ -f /etc/ssl/certs/ca-certificates.crt ]]; then
# Debian/Ubuntu-style path
export NIX_SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
echo "[docker] Using CA bundle: ${NIX_SSL_CERT_FILE}"
elif [[ -f /etc/pki/tls/certs/ca-bundle.crt ]]; then
# Fedora/RHEL/CentOS-style path
export NIX_SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt
echo "[docker] Using CA bundle: ${NIX_SSL_CERT_FILE}"
else
echo "[docker] WARNING: No CA bundle found for Nix (NIX_SSL_CERT_FILE not set)."
echo "[docker] HTTPS access for Nix flakes may fail."
fi
fi
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
echo "[docker] Starting package-manager container"
# Distro info for logging
if [[ -f /etc/os-release ]]; then
# shellcheck disable=SC1091
. /etc/os-release
echo "[docker] Detected distro: ${ID:-unknown} (like: ${ID_LIKE:-})"
fi
# Always use /src (mounted from host) as working directory
echo "[docker] Using /src as working directory"
cd /src
# ------------------------------------------------------------
# DEV mode: build/install package-manager from current /src
# ------------------------------------------------------------
if [[ "${PKGMGR_DEV:-0}" == "1" ]]; then
echo "[docker] DEV mode enabled (PKGMGR_DEV=1)"
echo "[docker] Rebuilding package-manager from /src via scripts/installation/run-package.sh..."
if [[ -x scripts/installation/run-package.sh ]]; then
bash scripts/installation/run-package.sh
else
echo "[docker] ERROR: scripts/installation/run-package.sh not found or not executable"
exit 1
fi
fi
# ------------------------------------------------------------
# Hand-off to pkgmgr / arbitrary command
# ------------------------------------------------------------
if [[ $# -eq 0 ]]; then
echo "[docker] No arguments provided. Showing pkgmgr help..."
exec pkgmgr --help
else
echo "[docker] Executing command: $*"
exec "$@"
fi
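# Usage sketch (flags and image tag are illustrative; the tag follows the
# naming used by the build scripts above): run a test image with the host
# checkout mounted at /src:
#
#   docker run --rm -v "$PWD":/src -e PKGMGR_DEV=1 \
#       package-manager-test-debian pkgmgr --help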

scripts/init-nix.sh Normal file → Executable file

@@ -1,44 +1,208 @@
 #!/usr/bin/env bash
 set -euo pipefail
-echo ">>> Initializing Nix environment for package-manager..."
+echo "[init-nix] Starting Nix initialization..."
-# 1. /nix store
-if [ ! -d /nix ]; then
-    echo ">>> Creating /nix store directory"
-    mkdir -m 0755 /nix
-    chown root:root /nix
+# ---------------------------------------------------------------------------
+# Helper: detect whether we are inside a container (Docker/Podman/etc.)
+# ---------------------------------------------------------------------------
+is_container() {
+    # Docker / Podman markers
+    if [[ -f /.dockerenv ]] || [[ -f /run/.containerenv ]]; then
+        return 0
+    fi
+    # cgroup hints
+    if grep -qiE 'docker|container|podman|lxc' /proc/1/cgroup 2>/dev/null; then
+        return 0
+    fi
+    # Environment variable used by some runtimes
+    if [[ -n "${container:-}" ]]; then
+        return 0
+    fi
+    return 1
+}
+# ---------------------------------------------------------------------------
+# Helper: ensure Nix binaries are on PATH (multi-user or single-user)
+# ---------------------------------------------------------------------------
+ensure_nix_on_path() {
+    # Multi-user profile (daemon install)
+    if [[ -x /nix/var/nix/profiles/default/bin/nix ]]; then
+        export PATH="/nix/var/nix/profiles/default/bin:${PATH}"
+    fi
+    # Single-user profile (current user)
+    if [[ -x "${HOME}/.nix-profile/bin/nix" ]]; then
+        export PATH="${HOME}/.nix-profile/bin:${PATH}"
+    fi
+    # Single-user profile for dedicated "nix" user (container case)
+    if [[ -x /home/nix/.nix-profile/bin/nix ]]; then
+        export PATH="/home/nix/.nix-profile/bin:${PATH}"
+    fi
+}
+# ---------------------------------------------------------------------------
+# Fast path: Nix already available
+# ---------------------------------------------------------------------------
+if command -v nix >/dev/null 2>&1; then
+    echo "[init-nix] Nix already available on PATH: $(command -v nix)"
+    exit 0
 fi
-# 2. Enable nix-daemon if available
-if command -v systemctl >/dev/null 2>&1 && systemctl list-unit-files | grep -q nix-daemon.service; then
-    echo ">>> Enabling nix-daemon.service"
-    systemctl enable --now nix-daemon.service 2>/dev/null || true
+ensure_nix_on_path
+if command -v nix >/dev/null 2>&1; then
+    echo "[init-nix] Nix found after adjusting PATH: $(command -v nix)"
+    exit 0
+fi
+echo "[init-nix] Nix not found, starting installation logic..."
+IN_CONTAINER=0
+if is_container; then
+    IN_CONTAINER=1
+    echo "[init-nix] Detected container environment."
 else
-    echo ">>> Warning: nix-daemon.service not found or systemctl not available."
+    echo "[init-nix] No container detected."
 fi
-# 3. Ensure nix-users group
-if ! getent group nix-users >/dev/null 2>&1; then
-    echo ">>> Creating nix-users group"
-    groupadd -r nix-users 2>/dev/null || true
-fi
+# ---------------------------------------------------------------------------
+# Container + root: install Nix as dedicated "nix" user (single-user)
+# ---------------------------------------------------------------------------
+if [[ "${IN_CONTAINER}" -eq 1 && "${EUID:-0}" -eq 0 ]]; then
+    echo "[init-nix] Running as root inside a container; using dedicated 'nix' user."
-# 4. Add users to nix-users (best-effort)
-if command -v loginctl >/dev/null 2>&1; then
-    for user in $(loginctl list-users | awk 'NR>1 {print $2}'); do
-        if id "$user" >/dev/null 2>&1; then
-            echo ">>> Adding user '$user' to nix-users"
-            usermod -aG nix-users "$user" 2>/dev/null || true
+    # Ensure nixbld group (required by Nix)
+    if ! getent group nixbld >/dev/null 2>&1; then
+        echo "[init-nix] Creating group 'nixbld'..."
+        groupadd -r nixbld
     fi
+    # Ensure Nix build users (nixbld1..nixbld10) as members of nixbld
+    for i in $(seq 1 10); do
+        if ! id "nixbld$i" >/dev/null 2>&1; then
+            echo "[init-nix] Creating build user nixbld$i..."
+            # -r: system account, -g: primary group, -G: supplementary (ensures membership is listed)
+            useradd -r -g nixbld -G nixbld -s /usr/sbin/nologin "nixbld$i"
+        fi
     done
-elif command -v logname >/dev/null 2>&1; then
-    USERNAME="$(logname 2>/dev/null || true)"
-    if [ -n "$USERNAME" ] && id "$USERNAME" >/dev/null 2>&1; then
-        echo ">>> Adding user '$USERNAME' to nix-users"
-        usermod -aG nix-users "$USERNAME" 2>/dev/null || true
+    # Ensure "nix" user (home at /home/nix)
+    if ! id nix >/dev/null 2>&1; then
+        echo "[init-nix] Creating user 'nix'..."
+        useradd -m -r -g nixbld -s /usr/bin/bash nix
     fi
+    # Create /nix directory and hand it to nix user (prevents installer sudo prompt)
+    if [[ ! -d /nix ]]; then
+        echo "[init-nix] Creating /nix with owner nix:nixbld..."
+        mkdir -m 0755 /nix
+        chown nix:nixbld /nix
+    fi
+    # Run Nix single-user installer as "nix"
+    echo "[init-nix] Installing Nix as user 'nix' (single-user, --no-daemon)..."
+    if command -v sudo >/dev/null 2>&1; then
+        sudo -u nix bash -lc 'sh <(curl -L https://nixos.org/nix/install) --no-daemon'
+    else
+        su - nix -c 'sh <(curl -L https://nixos.org/nix/install) --no-daemon'
+    fi
+    # After installation, expose nix to root via PATH and symlink
+    ensure_nix_on_path
+    if [[ -x /home/nix/.nix-profile/bin/nix ]]; then
+        if [[ ! -e /usr/local/bin/nix ]]; then
+            echo "[init-nix] Creating /usr/local/bin/nix symlink -> /home/nix/.nix-profile/bin/nix"
+            ln -s /home/nix/.nix-profile/bin/nix /usr/local/bin/nix
+        fi
+    fi
+    ensure_nix_on_path
+    if command -v nix >/dev/null 2>&1; then
+        echo "[init-nix] Nix successfully installed (container mode) at: $(command -v nix)"
+    else
+        echo "[init-nix] WARNING: Nix installation finished in container, but 'nix' is still not on PATH."
+    fi
+    # Optionally add PATH hints to /etc/profile (best effort)
+    if [[ -w /etc/profile ]]; then
+        if ! grep -q 'Nix profiles' /etc/profile 2>/dev/null; then
+            cat <<'EOF' >> /etc/profile
+# Nix profiles (added by package-manager init-nix.sh)
+if [ -d /nix/var/nix/profiles/default/bin ]; then
+    PATH="/nix/var/nix/profiles/default/bin:$PATH"
+fi
+if [ -d "$HOME/.nix-profile/bin" ]; then
+    PATH="$HOME/.nix-profile/bin:$PATH"
+fi
+EOF
+            echo "[init-nix] Appended Nix PATH setup to /etc/profile (container mode)."
+        fi
+    fi
+    echo "[init-nix] Nix initialization complete (container root mode)."
+    exit 0
+fi
+# ---------------------------------------------------------------------------
+# Non-container or non-root container: normal installer paths
+# ---------------------------------------------------------------------------
+if [[ "${IN_CONTAINER}" -eq 0 ]]; then
+    # Real host
+    if command -v systemctl >/dev/null 2>&1; then
+        echo "[init-nix] Host with systemd: using multi-user install (--daemon)."
+        sh <(curl -L https://nixos.org/nix/install) --daemon
+    else
+        if [[ "${EUID:-0}" -eq 0 ]]; then
+            echo "[init-nix] WARNING: Running as root without systemd on host."
+            echo "[init-nix] Falling back to single-user install (--no-daemon), but this is not recommended."
+            sh <(curl -L https://nixos.org/nix/install) --no-daemon
+        else
+            echo "[init-nix] Non-root host without systemd: using single-user install (--no-daemon)."
+            sh <(curl -L https://nixos.org/nix/install) --no-daemon
+        fi
+    fi
+else
+    # Container, but not root (rare)
+    echo "[init-nix] Container as non-root user: using single-user install (--no-daemon)."
+    sh <(curl -L https://nixos.org/nix/install) --no-daemon
+fi
+# ---------------------------------------------------------------------------
+# After installation: fix PATH (runtime + shell profiles)
+# ---------------------------------------------------------------------------
+ensure_nix_on_path
+if ! command -v nix >/dev/null 2>&1; then
+    echo "[init-nix] WARNING: Nix installation finished, but 'nix' is still not on PATH."
+    echo "[init-nix] You may need to source your shell profile manually."
+    exit 0
+fi
+echo "[init-nix] Nix successfully installed at: $(command -v nix)"
+# Update global /etc/profile if writable (helps especially on minimal systems)
+if [[ -w /etc/profile ]]; then
+    if ! grep -q 'Nix profiles' /etc/profile 2>/dev/null; then
+        cat <<'EOF' >> /etc/profile
+# Nix profiles (added by package-manager init-nix.sh)
+if [ -d /nix/var/nix/profiles/default/bin ]; then
+    PATH="/nix/var/nix/profiles/default/bin:$PATH"
+fi
+if [ -d "$HOME/.nix-profile/bin" ]; then
+    PATH="$HOME/.nix-profile/bin:$PATH"
+fi
+EOF
+        echo "[init-nix] Appended Nix PATH setup to /etc/profile"
+    fi
+fi
-echo ">>> Nix initialization complete."
-echo ">>> You may need to log out and log back in to activate group membership."
+echo "[init-nix] Nix initialization complete."


@@ -0,0 +1,54 @@
#!/usr/bin/env bash
set -euo pipefail
# ------------------------------------------------------------
# aur-builder-setup.sh
#
# Setup helper for an 'aur_builder' user and yay on Arch-based
# systems. Intended for host usage and can also be used in
# containers if desired.
# ------------------------------------------------------------
echo "[aur-builder-setup] Checking for pacman..."
if ! command -v pacman >/dev/null 2>&1; then
echo "[aur-builder-setup] pacman not found this is not an Arch-based system. Skipping."
exit 0
fi
if [[ "${EUID:-0}" -ne 0 ]]; then
ROOT_CMD="sudo"
else
ROOT_CMD=""
fi
echo "[aur-builder-setup] Installing base-devel, git, sudo..."
${ROOT_CMD} pacman -Syu --noconfirm
${ROOT_CMD} pacman -S --needed --noconfirm base-devel git sudo
echo "[aur-builder-setup] Ensuring aur_builder group/user..."
if ! getent group aur_builder >/dev/null 2>&1; then
${ROOT_CMD} groupadd -r aur_builder
fi
if ! id -u aur_builder >/dev/null 2>&1; then
${ROOT_CMD} useradd -m -r -g aur_builder -s /bin/bash aur_builder
fi
echo "[aur-builder-setup] Configuring sudoers for aur_builder..."
${ROOT_CMD} bash -c "echo '%aur_builder ALL=(ALL) NOPASSWD: /usr/bin/pacman' > /etc/sudoers.d/aur_builder"
${ROOT_CMD} chmod 0440 /etc/sudoers.d/aur_builder
if command -v sudo >/dev/null 2>&1; then
RUN_AS_AUR=(sudo -u aur_builder bash -lc)
else
RUN_AS_AUR=(su - aur_builder -c)
fi
echo "[aur-builder-setup] Ensuring yay is installed for aur_builder..."
if ! "${RUN_AS_AUR[@]}" 'command -v yay >/dev/null 2>&1'; then
"${RUN_AS_AUR[@]}" 'cd ~ && rm -rf yay && git clone https://aur.archlinux.org/yay.git && cd yay && makepkg -si --noconfirm'
else
echo "[aur-builder-setup] yay already installed."
fi
echo "[aur-builder-setup] Done."


@@ -0,0 +1,30 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
echo "[arch/dependencies] Installing Arch build dependencies..."
pacman -Syu --noconfirm
pacman -S --noconfirm --needed \
base-devel \
git \
rsync \
curl \
ca-certificates \
xz
pacman -Scc --noconfirm
# Always run AUR builder setup for Arch
AUR_SETUP="${SCRIPT_DIR}/aur-builder-setup.sh"
if [[ ! -x "${AUR_SETUP}" ]]; then
echo "[arch/dependencies] ERROR: AUR builder setup script not found or not executable: ${AUR_SETUP}"
exit 1
fi
echo "[arch/dependencies] Running AUR builder setup..."
bash "${AUR_SETUP}"
echo "[arch/dependencies] Done."


@@ -0,0 +1,19 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[arch/package] Building Arch package (makepkg --nodeps)..."
if id aur_builder >/dev/null 2>&1; then
echo "[arch/package] Using 'aur_builder' user for makepkg..."
chown -R aur_builder:aur_builder "$(pwd)"
su aur_builder -c "cd '$(pwd)' && rm -f package-manager-*.pkg.tar.* && makepkg --noconfirm --clean --nodeps"
else
echo "[arch/package] WARNING: user 'aur_builder' not found, running makepkg as current user..."
rm -f package-manager-*.pkg.tar.*
makepkg --noconfirm --clean --nodeps
fi
echo "[arch/package] Installing generated Arch package..."
pacman -U --noconfirm package-manager-*.pkg.tar.*
echo "[arch/package] Done."


@@ -0,0 +1,20 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[centos/dependencies] Installing CentOS build dependencies..."
dnf -y update
dnf -y install \
git \
rsync \
rpm-build \
make \
gcc \
bash \
curl-minimal \
ca-certificates \
xz
dnf clean all
echo "[centos/dependencies] Done."


@@ -0,0 +1,46 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[centos/package] Setting up rpmbuild directories..."
mkdir -p /root/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
echo "[centos/package] Extracting version from package-manager.spec..."
version="$(grep -E '^Version:' package-manager.spec | awk '{print $2}')"
if [[ -z "${version}" ]]; then
echo "ERROR: Version missing!"
exit 1
fi
srcdir="package-manager-${version}"
echo "[centos/package] Preparing source tree: ${srcdir}"
rm -rf "/tmp/${srcdir}"
mkdir -p "/tmp/${srcdir}"
cp -a . "/tmp/${srcdir}/"
echo "[centos/package] Creating source tarball..."
tar czf "/root/rpmbuild/SOURCES/${srcdir}.tar.gz" -C /tmp "${srcdir}"
echo "[centos/package] Copying SPEC..."
cp package-manager.spec /root/rpmbuild/SPECS/
echo "[centos/package] Running rpmbuild..."
cd /root/rpmbuild/SPECS
rpmbuild -bb package-manager.spec
echo "[centos/package] Installing generated RPM (local, offline, forced reinstall)..."
rpm_path="$(find /root/rpmbuild/RPMS -name 'package-manager-*.rpm' | head -n1)"
if [[ -z "${rpm_path}" ]]; then
echo "ERROR: RPM not found!"
exit 1
fi
# ------------------------------------------------------------
# Forced reinstall, always overwrite old version
# ------------------------------------------------------------
echo "[centos/package] Forcing reinstall via rpm -Uvh --replacepkgs --force"
rpm -Uvh --replacepkgs --force "${rpm_path}"
# Keep structure: remove temp directory afterwards
rm -rf "/tmp/${srcdir}"
echo "[centos/package] Done."


@@ -0,0 +1,20 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[debian/dependencies] Installing Debian build dependencies..."
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
build-essential \
debhelper \
dpkg-dev \
git \
rsync \
bash \
curl \
ca-certificates \
xz-utils
rm -rf /var/lib/apt/lists/*
echo "[debian/dependencies] Done."

Some files were not shown because too many files have changed in this diff.