Compare commits

...

11 Commits

Author SHA1 Message Date
Kevin Veen-Birkenbach
59d0355b91 Release version 0.6.0 2025-12-09 05:59:58 +01:00
Kevin Veen-Birkenbach
da9d5cfa6b Fix container tests, unify RPM install path, and ensure Nix TLS truststore detection
Changes included:
• GitHub Actions workflow: rename job from 'test-unit' to 'test-container' to match intent.
• RPM packaging: replace %{_libdir}/package-manager with a fixed /usr/lib/package-manager
  to avoid lib/lib64 divergence on CentOS and ensure pkgmgr + Nix flake resolution works
  consistently across distros.
• Docker entrypoint: add automatic CA-bundle detection and set NIX_SSL_CERT_FILE to fix
  TLS issues on CentOS ('unable to get local issuer certificate') when Nix fetches flake
  inputs.

These updates stabilize container-based tests and unify the runtime environment
for Fedora, CentOS, and other distributions.

Reference:
ChatGPT conversation: https://chatgpt.com/share/6937aa72-d33c-800f-a63f-c353e92de6b3
2025-12-09 05:50:08 +01:00
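The CA-bundle detection mentioned in this commit can be pictured roughly as follows. This is a hedged sketch, not the literal content of scripts/docker/entry.sh; the candidate paths are assumptions based on common Fedora/CentOS and Debian/Ubuntu layouts.

#!/usr/bin/env bash
# Sketch only: detect a CA bundle and expose it to Nix via NIX_SSL_CERT_FILE.
# The real entrypoint may check different or additional paths.
set -e
if [ -z "${NIX_SSL_CERT_FILE:-}" ]; then
  for candidate in \
    /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem \
    /etc/ssl/certs/ca-certificates.crt \
    /etc/ssl/certs/ca-bundle.crt; do
    if [ -f "$candidate" ]; then
      export NIX_SSL_CERT_FILE="$candidate"
      break
    fi
  done
fi
exec "$@"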
Kevin Veen-Birkenbach
f9943fafae Refactor container build and installation pipeline to use configurable Makefile parameters (e.g. DISTROS, base images) and propagate them through all build, install, and test scripts 2025-12-09 05:31:55 +01:00
Kevin Veen-Birkenbach
7d73007181 Release version 0.5.1 2025-12-09 01:21:31 +01:00
Kevin Veen-Birkenbach
c8462fefa4 Release version 0.5.0 2025-12-09 00:44:16 +01:00
Kevin Veen-Birkenbach
00a1f373ce Merge branch 'feature/config_v2.0' 2025-12-09 00:29:19 +01:00
Kevin Veen-Birkenbach
9f9f2e68c0 Release version 0.4.3 2025-12-09 00:29:08 +01:00
Kevin Veen-Birkenbach
d25dcb05e4 Merge branch 'feature/branch_close' 2025-12-09 00:03:56 +01:00
Kevin Veen-Birkenbach
e135d39710 Release version 0.4.2 2025-12-09 00:03:46 +01:00
Kevin Veen-Birkenbach
76b7f84989 Release version 0.4.1 2025-12-08 23:20:28 +01:00
Kevin Veen-Birkenbach
1b53263f87 Release version 0.4.0 2025-12-08 23:02:43 +01:00
55 changed files with 2233 additions and 824 deletions

25
.github/workflows/test-container.yml vendored Normal file
View File

@@ -0,0 +1,25 @@
name: Test Distribution Containers
on:
  push:
    branches:
      - main
      - master
      - develop
      - "*"
  pull_request:
jobs:
  test-container:
    runs-on: ubuntu-latest
    timeout-minutes: 30
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Show Docker version
        run: docker version
      - name: Run container tests
        run: make test-container

11
.gitignore vendored
View File

@@ -15,7 +15,7 @@ venv/
# Build artifacts
dist/
build/
build/*
*.egg-info/
# Editor files
@@ -31,4 +31,11 @@ Thumbs.db
# Ignore logs
*.log
package-manager-*
package-manager-*
# debian
debian/package-manager/
debian/debhelper-build-stamp
debian/files
debian/.debhelper/
debian/package-manager.substvars

CHANGELOG.md
View File

@@ -1,3 +1,37 @@
## [0.6.0] - 2025-12-09
* Expose DISTROS and BASE_IMAGE_* variables as exported Makefile environment variables so all build and test commands can consume them dynamically. By exporting these values, every Make target (e.g., build, build-no-cache, build-missing, test-container, test-unit, test-e2e) and every delegated script in scripts/build/ and scripts/test/ now receives a consistent view of the supported distributions and their base container images. This change removes duplicated definitions across scripts, ensures reproducible builds, and allows build tooling to react automatically when new distros or base images are added to the Makefile.
## [0.5.1] - 2025-12-09
* Refine pkgmgr release CLI close wiring and integration tests for --close flag (ChatGPT: https://chatgpt.com/share/69376b4e-8440-800f-9d06-535ec1d7a40e)
## [0.5.0] - 2025-12-09
* Add pkgmgr branch close subcommand, extend CLI parser wiring, and add unit tests for branch handling and version-selection logic (see ChatGPT conversation: https://chatgpt.com/share/693762a3-9ea8-800f-a640-bc78170953d1)
## [0.4.3] - 2025-12-09
* Implement current-directory repository selection for release and proxy commands, unify selection semantics across CLI layers, extend release workflow with --close, integrate branch closing logic, fix wiring for get_repo_identifier/get_repo_dir, update packaging files (PKGBUILD, spec, flake.nix, pyproject), and add comprehensive unit/e2e tests for release and branch commands (see ChatGPT conversation: https://chatgpt.com/share/69375cfe-9e00-800f-bd65-1bd5937e1696)
## [0.4.2] - 2025-12-09
* Wire pkgmgr release CLI to new helper and add unit tests (see ChatGPT conversation: https://chatgpt.com/share/69374f09-c760-800f-92e4-5b44a4510b62)
## [0.4.1] - 2025-12-08
* Add branch close subcommand and integrate release close/editor flow (ChatGPT: https://chatgpt.com/share/69374f09-c760-800f-92e4-5b44a4510b62)
## [0.4.0] - 2025-12-08
* Add branch closing helper and --close flag to release command, including CLI wiring and tests (see https://chatgpt.com/share/69374aec-74ec-800f-bde3-5d91dfdb9b91)
## [0.3.0] - 2025-12-08
* Massive refactor and feature expansion:
@@ -10,7 +44,6 @@
- Expanded E2E tests for list, proxy, and selection logic
Conversation: https://chatgpt.com/share/693745c3-b8d8-800f-aa29-c8481a2ffae1
## [0.2.0] - 2025-12-08
* Add preview-first release workflow and extended packaging support (see ChatGPT conversation: https://chatgpt.com/share/693722b4-af9c-800f-bccc-8a4036e99630)
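The 0.6.0 entry above describes exporting DISTROS and BASE_IMAGE_* from the Makefile; a hypothetical sketch of how a delegated build script could consume those exported values (the real scripts/build/*.sh may differ):

#!/usr/bin/env bash
# Illustrative only: consume the Makefile-exported DISTROS and BASE_IMAGE_*.
set -euo pipefail
: "${DISTROS:?DISTROS must be exported by the Makefile}"
for distro in $DISTROS; do
  # Resolve BASE_IMAGE_ARCH, BASE_IMAGE_DEBIAN, ... via indirect expansion.
  var="BASE_IMAGE_$(echo "$distro" | tr '[:lower:]' '[:upper:]')"
  base_image="${!var:?No base image exported for $distro}"
  echo "Building package-manager-test-$distro from $base_image"
  docker build --build-arg BASE_IMAGE="$base_image" \
    -t "package-manager-test-$distro" .
done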

Dockerfile
View File

@@ -4,87 +4,6 @@
ARG BASE_IMAGE=archlinux:latest
FROM ${BASE_IMAGE}
# ------------------------------------------------------------
# System base + conditional package tool installation
#
# Important:
# - We do NOT install Nix directly here via curl.
# - Nix is installed/initialized by init-nix.sh, which is invoked
# from the system packaging hooks (Arch .install, Debian postinst,
# RPM %post).
# ------------------------------------------------------------
RUN set -e; \
if [ -f /etc/os-release ]; then . /etc/os-release; else echo "No /etc/os-release found" && exit 1; fi; \
echo "Detected base image: ${ID:-unknown} (like: ${ID_LIKE:-})"; \
\
if [ "$ID" = "arch" ]; then \
pacman -Syu --noconfirm && \
pacman -S --noconfirm --needed \
base-devel \
git \
rsync \
curl \
ca-certificates \
xz && \
pacman -Scc --noconfirm; \
elif [ "$ID" = "debian" ]; then \
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
build-essential \
debhelper \
dpkg-dev \
git \
rsync \
bash \
curl \
ca-certificates \
xz-utils && \
rm -rf /var/lib/apt/lists/*; \
elif [ "$ID" = "ubuntu" ]; then \
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
build-essential \
debhelper \
dpkg-dev \
git \
tzdata \
lsb-release \
rsync \
bash \
curl \
ca-certificates \
xz-utils && \
rm -rf /var/lib/apt/lists/*; \
elif [ "$ID" = "fedora" ]; then \
dnf -y update && \
dnf -y install \
git \
rsync \
rpm-build \
make \
gcc \
bash \
curl \
ca-certificates \
xz && \
dnf clean all; \
elif [ "$ID" = "centos" ]; then \
dnf -y update && \
dnf -y install \
git \
rsync \
rpm-build \
make \
gcc \
bash \
curl-minimal \
ca-certificates \
xz && \
dnf clean all; \
else \
echo "Unsupported base image: ${ID}" && exit 1; \
fi
# ------------------------------------------------------------
# Nix environment defaults
#
@@ -96,94 +15,38 @@ ENV NIX_CONFIG="experimental-features = nix-command flakes"
# ------------------------------------------------------------
# Unprivileged user for Arch package build (makepkg)
# ------------------------------------------------------------
RUN useradd -m builder || true
RUN useradd -m aur_builder || true
# ------------------------------------------------------------
# Copy scripts and install distro dependencies
# ------------------------------------------------------------
WORKDIR /build
# Copy only scripts first so dependency installation can run early
COPY scripts/ scripts/
RUN find scripts -type f -name '*.sh' -exec chmod +x {} \;
# Install distro-specific build dependencies (and AUR builder on Arch)
RUN scripts/installation/run-dependencies.sh
# ------------------------------------------------------------
# Select distro-specific Docker entrypoint
# ------------------------------------------------------------
# Docker entrypoint (distro-agnostic, uses run-package.sh)
# ------------------------------------------------------------
COPY scripts/docker/entry.sh /usr/local/bin/docker-entry.sh
RUN chmod +x /usr/local/bin/docker-entry.sh
# ------------------------------------------------------------
# Build and install distro-native package-manager package
#
# - Arch: PKGBUILD -> pacman -U
# - Debian: debhelper -> dpkg-buildpackage -> apt install ./package-manager_*.deb
# - Ubuntu: same as Debian
# - Fedora: rpmbuild -> dnf/dnf5/yum install package-manager-*.rpm
# - CentOS: rpmbuild -> dnf/yum install package-manager-*.rpm
#
# Nix is NOT manually installed here; it is handled by init-nix.sh.
# via Makefile `install` target (calls scripts/installation/run-package.sh)
# ------------------------------------------------------------
WORKDIR /build
COPY . .
RUN find scripts -type f -name '*.sh' -exec chmod +x {} \;
RUN set -e; \
. /etc/os-release; \
if [ "$ID" = "arch" ]; then \
echo 'Building Arch package (makepkg --nodeps)...'; \
chown -R builder:builder /build; \
su builder -c "cd /build && rm -f package-manager-*.pkg.tar.* && makepkg --noconfirm --clean --nodeps"; \
\
echo 'Installing generated Arch package...'; \
pacman -U --noconfirm package-manager-*.pkg.tar.*; \
elif [ "$ID" = "debian" ] || [ "$ID" = "ubuntu" ]; then \
echo 'Building Debian/Ubuntu package...'; \
dpkg-buildpackage -us -uc -b; \
\
echo 'Installing generated DEB package...'; \
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y ./../package-manager_*.deb && \
rm -rf /var/lib/apt/lists/*; \
elif [ "$ID" = "fedora" ] || [ "$ID" = "centos" ]; then \
echo 'Setting up rpmbuild dirs...'; \
mkdir -p /root/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}; \
\
echo "Extracting version from package-manager.spec..."; \
version=$(grep -E '^Version:' /build/package-manager.spec | awk '{print $2}'); \
if [ -z "$version" ]; then echo 'ERROR: Version missing!' && exit 1; fi; \
srcdir="package-manager-${version}"; \
\
echo "Preparing source tree for RPM: $srcdir"; \
rm -rf "/tmp/$srcdir"; \
mkdir -p "/tmp/$srcdir"; \
cp -a /build/. "/tmp/$srcdir/"; \
\
echo "Creating source tarball: /root/rpmbuild/SOURCES/$srcdir.tar.gz"; \
tar czf "/root/rpmbuild/SOURCES/$srcdir.tar.gz" -C /tmp "$srcdir"; \
\
echo 'Copying SPEC...'; \
cp /build/package-manager.spec /root/rpmbuild/SPECS/; \
\
echo 'Running rpmbuild...'; \
cd /root/rpmbuild/SPECS && rpmbuild -bb package-manager.spec; \
\
echo 'Installing generated RPM (local, offline)...'; \
rpm_path=$(find /root/rpmbuild/RPMS -name "package-manager-*.rpm" | head -n1); \
if [ -z "$rpm_path" ]; then echo 'ERROR: RPM not found!' && exit 1; fi; \
\
if command -v dnf5 >/dev/null 2>&1; then \
echo 'Using dnf5 to install local RPM (no remote repos)...'; \
if ! dnf5 install -y --disablerepo='*' "$rpm_path"; then \
echo 'dnf5 failed, falling back to rpm -i --nodeps'; \
rpm -i --nodeps "$rpm_path"; \
fi; \
elif command -v dnf >/dev/null 2>&1; then \
echo 'Using dnf to install local RPM (no remote repos)...'; \
if ! dnf install -y --disablerepo='*' "$rpm_path"; then \
echo 'dnf failed, falling back to rpm -i --nodeps'; \
rpm -i --nodeps "$rpm_path"; \
fi; \
elif command -v yum >/dev/null 2>&1; then \
echo 'Using yum to install local RPM (no remote repos)...'; \
if ! yum localinstall -y --disablerepo='*' "$rpm_path"; then \
echo 'yum failed, falling back to rpm -i --nodeps'; \
rpm -i --nodeps "$rpm_path"; \
fi; \
else \
echo 'No dnf/dnf5/yum found, falling back to rpm -i --nodeps...'; \
rpm -i --nodeps "$rpm_path"; \
fi; \
\
rm -rf "/tmp/$srcdir"; \
else \
echo "Unsupported distro: ${ID}"; \
exit 1; \
fi; \
echo "Building and installing package-manager via make install..."; \
make install; \
rm -rf /build
# ------------------------------------------------------------
@@ -191,8 +54,5 @@ RUN set -e; \
# ------------------------------------------------------------
WORKDIR /src
COPY scripts/docker-entry-dev.sh /usr/local/bin/docker-entry-dev.sh
RUN chmod +x /usr/local/bin/docker-entry-dev.sh
ENTRYPOINT ["/usr/local/bin/docker-entry-dev.sh"]
CMD ["--help"]
ENTRYPOINT ["/usr/local/bin/docker-entry.sh"]
CMD ["pkgmgr", "--help"]

278
Makefile
View File

@@ -1,5 +1,6 @@
.PHONY: install setup uninstall aur_builder_setup \
test build build-no-cache test-unit test-e2e test-integration
.PHONY: install setup uninstall \
test build build-no-cache test-unit test-e2e test-integration \
test-container
# ------------------------------------------------------------
# Local Nix cache directories in the repo
@@ -9,267 +10,72 @@ NIX_CACHE_VOLUME := pkgmgr_nix_cache
# ------------------------------------------------------------
# Distro list and base images
# (kept for documentation/reference; actual build logic is in scripts/build)
# ------------------------------------------------------------
DISTROS := arch debian ubuntu fedora centos
BASE_IMAGE_ARCH := archlinux:latest
BASE_IMAGE_DEBIAN := debian:stable-slim
BASE_IMAGE_UBUNTU := ubuntu:latest
BASE_IMAGE_FEDORA := fedora:latest
BASE_IMAGE_CENTOS := quay.io/centos/centos:stream9
BASE_IMAGE_arch := archlinux:latest
BASE_IMAGE_debian := debian:stable-slim
BASE_IMAGE_ubuntu := ubuntu:latest
BASE_IMAGE_fedora := fedora:latest
BASE_IMAGE_centos := quay.io/centos/centos:stream9
# Helper to echo which image is used for which distro (purely informational)
define echo_build_info
@echo "Building image for distro '$(1)' with base image '$(2)'..."
endef
# Make them available in scripts
export DISTROS
export BASE_IMAGE_ARCH
export BASE_IMAGE_DEBIAN
export BASE_IMAGE_UBUNTU
export BASE_IMAGE_FEDORA
export BASE_IMAGE_CENTOS
# ------------------------------------------------------------
# PKGMGR setup (wrapper)
# PKGMGR setup (developer wrapper -> scripts/installation/main.sh)
# ------------------------------------------------------------
setup: install
@echo "Running pkgmgr setup via main.py..."
@if [ -x "$$HOME/.venvs/pkgmgr/bin/python" ]; then \
echo "Using virtualenv Python at $$HOME/.venvs/pkgmgr/bin/python"; \
"$$HOME/.venvs/pkgmgr/bin/python" main.py install; \
else \
echo "Virtualenv not found, falling back to system python3"; \
python3 main.py install; \
fi
setup:
@bash scripts/installation/main.sh
# ------------------------------------------------------------
# Docker build targets: build all images
# Docker build targets (delegated to scripts/build)
# ------------------------------------------------------------
build-no-cache:
@for distro in $(DISTROS); do \
case "$$distro" in \
arch) base_image="$(BASE_IMAGE_arch)" ;; \
debian) base_image="$(BASE_IMAGE_debian)" ;; \
ubuntu) base_image="$(BASE_IMAGE_ubuntu)" ;; \
fedora) base_image="$(BASE_IMAGE_fedora)" ;; \
centos) base_image="$(BASE_IMAGE_centos)" ;; \
*) echo "Unknown distro '$$distro'" >&2; exit 1 ;; \
esac; \
echo "Building test image 'package-manager-test-$$distro' with no cache (BASE_IMAGE=$$base_image)..."; \
docker build --no-cache \
--build-arg BASE_IMAGE="$$base_image" \
-t "package-manager-test-$$distro" . || exit $$?; \
done
@bash scripts/build/build-image-no-cache.sh
build:
@for distro in $(DISTROS); do \
case "$$distro" in \
arch) base_image="$(BASE_IMAGE_arch)" ;; \
debian) base_image="$(BASE_IMAGE_debian)" ;; \
ubuntu) base_image="$(BASE_IMAGE_ubuntu)" ;; \
fedora) base_image="$(BASE_IMAGE_fedora)" ;; \
centos) base_image="$(BASE_IMAGE_centos)" ;; \
*) echo "Unknown distro '$$distro'" >&2; exit 1 ;; \
esac; \
echo "Building test image 'package-manager-test-$$distro' (BASE_IMAGE=$$base_image)..."; \
docker build \
--build-arg BASE_IMAGE="$$base_image" \
-t "package-manager-test-$$distro" . || exit $$?; \
done
build-arch:
@base_image="$(BASE_IMAGE_arch)"; \
echo "Building test image 'package-manager-test-arch' (BASE_IMAGE=$$base_image)..."; \
docker build \
--build-arg BASE_IMAGE="$$base_image" \
-t "package-manager-test-arch" . || exit $$?;
@bash scripts/build/build-image.sh
# ------------------------------------------------------------
# Test targets
# Test targets (delegated to scripts/test)
# ------------------------------------------------------------
# Unit tests: only in Arch container (fastest feedback), via Nix devShell
test-unit: build-arch
@echo "============================================================"
@echo ">>> Running UNIT tests in Arch container (via Nix devShell)"
@echo "============================================================"
docker run --rm \
-v "$$(pwd):/src" \
--workdir /src \
--entrypoint bash \
"package-manager-test-arch" \
-c '\
set -e; \
if [ -f /etc/os-release ]; then . /etc/os-release; fi; \
echo "Detected container distro: $${ID:-unknown} (like: $${ID_LIKE:-})"; \
echo "Running Python unit tests (tests/unit) via nix develop..."; \
git config --global --add safe.directory /src || true; \
cd /src; \
nix develop .#default --no-write-lock-file -c \
python -m unittest discover \
-s tests/unit \
-t /src \
-p "test_*.py"; \
'
test-unit:
@bash scripts/test/test-unit.sh
# Integration tests: also in Arch container, via Nix devShell
test-integration: build-arch
@echo "============================================================"
@echo ">>> Running INTEGRATION tests in Arch container (via Nix devShell)"
@echo "============================================================"
docker run --rm \
-v "$$(pwd):/src" \
--workdir /src \
--entrypoint bash \
"package-manager-test-arch" \
-c '\
set -e; \
if [ -f /etc/os-release ]; then . /etc/os-release; fi; \
echo "Detected container distro: $${ID:-unknown} (like: $${ID_LIKE:-})"; \
echo "Running Python integration tests (tests/integration) via nix develop..."; \
git config --global --add safe.directory /src || true; \
cd /src; \
nix develop .#default --no-write-lock-file -c \
python -m unittest discover \
-s tests/integration \
-t /src \
-p "test_*.py"; \
'
test-integration:
@bash scripts/test/test-integration.sh
# End-to-end tests: run in all distros via Nix devShell (tests/e2e)
test-e2e: build
@echo "Ensuring Docker Nix volumes exist (auto-created if missing)..."
@echo "Running E2E tests inside Nix devShell with cached store for all distros: $(DISTROS)"
test-e2e:
@bash scripts/test/test-e2e.sh
@for distro in $(DISTROS); do \
echo "============================================================"; \
echo ">>> Running E2E tests in container for distro: $$distro"; \
echo "============================================================"; \
# Only for Arch: mount /nix as volume, for others use image-installed Nix \
if [ "$$distro" = "arch" ]; then \
NIX_STORE_MOUNT="-v $(NIX_STORE_VOLUME):/nix"; \
else \
NIX_STORE_MOUNT=""; \
fi; \
docker run --rm \
-v "$$(pwd):/src" \
$$NIX_STORE_MOUNT \
-v "$(NIX_CACHE_VOLUME):/root/.cache/nix" \
--workdir /src \
--entrypoint bash \
"package-manager-test-$$distro" \
-c '\
set -e; \
if [ -f /etc/os-release ]; then . /etc/os-release; fi; \
echo "Detected container distro: $${ID:-unknown} (like: $${ID_LIKE:-})"; \
echo "Preparing Nix environment..."; \
if [ -f "/nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh" ]; then \
. "/nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh"; \
fi; \
if [ -f "$$HOME/.nix-profile/etc/profile.d/nix.sh" ]; then \
. "$$HOME/.nix-profile/etc/profile.d/nix.sh"; \
fi; \
PATH="/nix/var/nix/profiles/default/bin:$$HOME/.nix-profile/bin:$$PATH"; \
export PATH; \
echo "PATH is now:"; \
echo "$$PATH"; \
NIX_CMD=""; \
if command -v nix >/dev/null 2>&1; then \
echo "Found nix on PATH:"; \
command -v nix; \
NIX_CMD="nix"; \
else \
echo "nix not found on PATH, scanning /nix/store for a nix binary..."; \
for path in /nix/store/*-nix-*/bin/nix; do \
if [ -x "$$path" ]; then \
echo "Found nix binary at $$path"; \
NIX_CMD="$$path"; \
break; \
fi; \
done; \
fi; \
if [ -z "$$NIX_CMD" ]; then \
echo "ERROR: nix binary not found anywhere cannot run devShell"; \
exit 1; \
fi; \
echo "Using Nix command: $$NIX_CMD"; \
echo "Run E2E tests inside Nix devShell (tests/e2e)..."; \
git config --global --add safe.directory /src || true; \
cd /src; \
"$$NIX_CMD" develop .#default --no-write-lock-file -c \
python3 -m unittest discover \
-s /src/tests/e2e \
-p "test_*.py"; \
' || exit $$?; \
done
test-container:
@bash scripts/test/test-container.sh
# ------------------------------------------------------------
# Build only missing container images
# ------------------------------------------------------------
build-missing:
@bash scripts/build/build-image-missing.sh
# Combined test target for local + CI (unit + e2e + integration)
test: build test-unit test-e2e test-integration
test: build-missing test-container test-unit test-e2e test-integration
# ------------------------------------------------------------
# Installer for host systems (original logic)
# System install (native packages, calls scripts/installation/run-package.sh)
# ------------------------------------------------------------
install:
@if [ -n "$$IN_NIX_SHELL" ]; then \
echo "Nix shell detected (IN_NIX_SHELL=1). Skipping venv/pip install handled by Nix flake."; \
else \
echo "Making 'main.py' executable..."; \
chmod +x main.py; \
echo "Checking if global user virtual environment exists..."; \
mkdir -p "$$HOME/.venvs"; \
if [ ! -d "$$HOME/.venvs/pkgmgr" ]; then \
echo "Creating global venv at $$HOME/.venvs/pkgmgr..."; \
python3 -m venv "$$HOME/.venvs/pkgmgr"; \
fi; \
echo "Installing required Python packages into $$HOME/.venvs/pkgmgr..."; \
"$$HOME/.venvs/pkgmgr/bin/python" -m ensurepip --upgrade; \
"$$HOME/.venvs/pkgmgr/bin/pip" install --upgrade pip setuptools wheel; \
echo "Looking for requirements.txt / _requirements.txt..."; \
if [ -f requirements.txt ]; then \
echo "Installing Python packages from requirements.txt..."; \
"$$HOME/.venvs/pkgmgr/bin/pip" install -r requirements.txt; \
elif [ -f _requirements.txt ]; then \
echo "Installing Python packages from _requirements.txt..."; \
"$$HOME/.venvs/pkgmgr/bin/pip" install -r _requirements.txt; \
else \
echo "No requirements.txt or _requirements.txt found, skipping dependency installation."; \
fi; \
echo "Ensuring $$HOME/.bashrc and $$HOME/.zshrc exist..."; \
touch "$$HOME/.bashrc" "$$HOME/.zshrc"; \
echo "Ensuring automatic activation of $$HOME/.venvs/pkgmgr for this user..."; \
for rc in "$$HOME/.bashrc" "$$HOME/.zshrc"; do \
rc_line='if [ -d "$${HOME}/.venvs/pkgmgr" ]; then . "$${HOME}/.venvs/pkgmgr/bin/activate"; if [ -n "$${PS1:-}" ]; then echo "Global Python virtual environment '\''~/.venvs/pkgmgr'\'' activated."; fi; fi'; \
grep -qxF "$${rc_line}" "$$rc" || echo "$${rc_line}" >> "$$rc"; \
done; \
echo "Arch/Manjaro detection and optional AUR setup..."; \
if command -v pacman >/dev/null 2>&1; then \
$(MAKE) aur_builder_setup; \
else \
echo "Not Arch-based (no pacman). Skipping aur_builder/yay setup."; \
fi; \
echo "Installation complete. Please restart your shell (or 'exec bash' or 'exec zsh') for the changes to take effect."; \
fi
# ------------------------------------------------------------
# AUR builder setup — only on Arch/Manjaro
# ------------------------------------------------------------
aur_builder_setup:
@echo "Setting up aur_builder and yay (Arch/Manjaro)..."
@sudo pacman -Syu --noconfirm
@sudo pacman -S --needed --noconfirm base-devel git sudo
@if ! getent group aur_builder >/dev/null; then sudo groupadd -r aur_builder; fi
@if ! id -u aur_builder >/dev/null 2>&1; then sudo useradd -m -r -g aur_builder -s /bin/bash aur_builder; fi
@echo '%aur_builder ALL=(ALL) NOPASSWD: /usr/bin/pacman' | sudo tee /etc/sudoers.d/aur_builder >/dev/null
@sudo chmod 0440 /etc/sudoers.d/aur_builder
@if ! sudo -u aur_builder bash -lc 'command -v yay >/dev/null'; then \
sudo -u aur_builder bash -lc 'cd ~ && rm -rf yay && git clone https://aur.archlinux.org/yay.git && cd yay && makepkg -si --noconfirm'; \
else \
echo "yay already installed."; \
fi
@echo "aur_builder/yay setup complete."
@echo "Building and installing distro-native package-manager for this system..."
@bash scripts/installation/run-package.sh
# ------------------------------------------------------------
# Uninstall target
# ------------------------------------------------------------
uninstall:
@echo "Removing global user virtual environment if it exists..."
@rm -rf "$$HOME/.venvs/pkgmgr"
@echo "Cleaning up $$HOME/.bashrc and $$HOME/.zshrc entries..."
@for rc in "$$HOME/.bashrc" "$$HOME/.zshrc"; do \
sed -i '/\.venvs\/pkgmgr\/bin\/activate"; if \[ -n "\$${PS1:-}" \]; then echo "Global Python virtual environment '\''~\/\.venvs\/pkgmgr'\'' activated."; fi; fi/d' "$$rc"; \
done
@echo "Uninstallation complete. Please restart your shell (or 'exec bash' or 'exec zsh') for the changes to fully apply."
@bash scripts/uninstall.sh
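Typical invocations of the new delegated targets (target names are taken from the Makefile above; the DISTROS override relies on standard make variable overriding):

make build-missing                 # build only container images that do not exist yet
make test-container                # run the container test suite via scripts/test/test-container.sh
make test                          # build-missing + test-container + test-unit + test-e2e + test-integration
make build DISTROS="arch debian"   # restrict the exported distro list for a quick local build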

PKGBUILD
View File

@@ -1,7 +1,7 @@
# Maintainer: Kevin Veen-Birkenbach <info@veen.world>
pkgname=package-manager
pkgver=0.3.0
pkgver=0.6.0
pkgrel=1
pkgdesc="Local-flake wrapper for Kevin's package-manager (Nix-based)."
arch=('any')

42
debian/changelog vendored
View File

@@ -1,3 +1,45 @@
package-manager (0.6.0-1) unstable; urgency=medium
* Expose DISTROS and BASE_IMAGE_* variables as exported Makefile environment variables so all build and test commands can consume them dynamically. By exporting these values, every Make target (e.g., build, build-no-cache, build-missing, test-container, test-unit, test-e2e) and every delegated script in scripts/build/ and scripts/test/ now receives a consistent view of the supported distributions and their base container images. This change removes duplicated definitions across scripts, ensures reproducible builds, and allows build tooling to react automatically when new distros or base images are added to the Makefile.
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 05:59:58 +0100
package-manager (0.5.1-1) unstable; urgency=medium
* Refine pkgmgr release CLI close wiring and integration tests for --close flag (ChatGPT: https://chatgpt.com/share/69376b4e-8440-800f-9d06-535ec1d7a40e)
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 01:21:31 +0100
package-manager (0.5.0-1) unstable; urgency=medium
* Add pkgmgr branch close subcommand, extend CLI parser wiring, and add unit tests for branch handling and version-selection logic (see ChatGPT conversation: https://chatgpt.com/share/693762a3-9ea8-800f-a640-bc78170953d1)
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 00:44:16 +0100
package-manager (0.4.3-1) unstable; urgency=medium
* Implement current-directory repository selection for release and proxy commands, unify selection semantics across CLI layers, extend release workflow with --close, integrate branch closing logic, fix wiring for get_repo_identifier/get_repo_dir, update packaging files (PKGBUILD, spec, flake.nix, pyproject), and add comprehensive unit/e2e tests for release and branch commands (see ChatGPT conversation: https://chatgpt.com/share/69375cfe-9e00-800f-bd65-1bd5937e1696)
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 00:29:08 +0100
package-manager (0.4.2-1) unstable; urgency=medium
* Wire pkgmgr release CLI to new helper and add unit tests (see ChatGPT conversation: https://chatgpt.com/share/69374f09-c760-800f-92e4-5b44a4510b62)
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 00:03:46 +0100
package-manager (0.4.1-1) unstable; urgency=medium
* Add branch close subcommand and integrate release close/editor flow (ChatGPT: https://chatgpt.com/share/69374f09-c760-800f-92e4-5b44a4510b62)
-- Kevin Veen-Birkenbach <kevin@veen.world> Mon, 08 Dec 2025 23:20:28 +0100
package-manager (0.4.0-1) unstable; urgency=medium
* Add branch closing helper and --close flag to release command, including CLI wiring and tests (see https://chatgpt.com/share/69374aec-74ec-800f-bde3-5d91dfdb9b91)
-- Kevin Veen-Birkenbach <kevin@veen.world> Mon, 08 Dec 2025 23:02:43 +0100
package-manager (0.3.0-1) unstable; urgency=medium
* Massive refactor and feature expansion:

flake.nix
View File

@@ -31,7 +31,7 @@
rec {
pkgmgr = pyPkgs.buildPythonApplication {
pname = "package-manager";
version = "0.3.0";
version = "0.6.0";
# Use the git repo as source
src = ./.;

package-manager.spec
View File

@@ -1,5 +1,5 @@
Name: package-manager
Version: 0.3.0
Version: 0.6.0
Release: 1%{?dist}
Summary: Wrapper that runs Kevin's package-manager via Nix flake
@@ -35,35 +35,36 @@ available on the system.
%install
rm -rf %{buildroot}
install -d %{buildroot}%{_bindir}
install -d %{buildroot}%{_libdir}/package-manager
# Install project tree into a fixed, architecture-independent location.
install -d %{buildroot}/usr/lib/package-manager
# Copy full project source into /usr/lib/package-manager
cp -a . %{buildroot}%{_libdir}/package-manager/
cp -a . %{buildroot}/usr/lib/package-manager/
# Wrapper
install -m0755 scripts/pkgmgr-wrapper.sh %{buildroot}%{_bindir}/pkgmgr
# Shared Nix init script (ensure it is executable in the installed tree)
install -m0755 scripts/init-nix.sh %{buildroot}%{_libdir}/package-manager/init-nix.sh
install -m0755 scripts/init-nix.sh %{buildroot}/usr/lib/package-manager/init-nix.sh
# Remove packaging-only and development artefacts from the installed tree
rm -rf \
%{buildroot}%{_libdir}/package-manager/PKGBUILD \
%{buildroot}%{_libdir}/package-manager/Dockerfile \
%{buildroot}%{_libdir}/package-manager/debian \
%{buildroot}%{_libdir}/package-manager/.git \
%{buildroot}%{_libdir}/package-manager/.github \
%{buildroot}%{_libdir}/package-manager/tests \
%{buildroot}%{_libdir}/package-manager/.gitignore \
%{buildroot}%{_libdir}/package-manager/__pycache__ \
%{buildroot}%{_libdir}/package-manager/.gitkeep || true
%{buildroot}/usr/lib/package-manager/PKGBUILD \
%{buildroot}/usr/lib/package-manager/Dockerfile \
%{buildroot}/usr/lib/package-manager/debian \
%{buildroot}/usr/lib/package-manager/.git \
%{buildroot}/usr/lib/package-manager/.github \
%{buildroot}/usr/lib/package-manager/tests \
%{buildroot}/usr/lib/package-manager/.gitignore \
%{buildroot}/usr/lib/package-manager/__pycache__ \
%{buildroot}/usr/lib/package-manager/.gitkeep || true
%post
# Initialize Nix (if needed) after installing the package-manager files.
if [ -x %{_libdir}/package-manager/init-nix.sh ]; then
%{_libdir}/package-manager/init-nix.sh || true
if [ -x /usr/lib/package-manager/init-nix.sh ]; then
/usr/lib/package-manager/init-nix.sh || true
else
echo ">>> Warning: %{_libdir}/package-manager/init-nix.sh not found or not executable."
echo ">>> Warning: /usr/lib/package-manager/init-nix.sh not found or not executable."
fi
%postun
@@ -73,7 +74,7 @@ echo ">>> package-manager removed. Nix itself was not removed."
%doc README.md
%license LICENSE
%{_bindir}/pkgmgr
%{_libdir}/package-manager/
/usr/lib/package-manager/
%changelog
* Sat Dec 06 2025 Kevin Veen-Birkenbach <info@veen.world> - 0.1.1-1

pkgmgr/branch_commands.py
View File

@@ -1,3 +1,4 @@
# pkgmgr/branch_commands.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
@@ -12,7 +13,7 @@ from __future__ import annotations
from typing import Optional
from pkgmgr.git_utils import run_git, GitError
from pkgmgr.git_utils import run_git, GitError, get_current_branch
def open_branch(
@@ -78,3 +79,136 @@ def open_branch(
raise RuntimeError(
f"Failed to push new branch {name!r} to origin: {exc}"
) from exc
def _resolve_base_branch(
preferred: str,
fallback: str,
cwd: str,
) -> str:
"""
Resolve the base branch to use for merging.
Try `preferred` (default: main) first, then `fallback` (default: master).
Raise RuntimeError if neither exists.
"""
for candidate in (preferred, fallback):
try:
run_git(["rev-parse", "--verify", candidate], cwd=cwd)
return candidate
except GitError:
continue
raise RuntimeError(
f"Neither {preferred!r} nor {fallback!r} exist in this repository."
)
def close_branch(
name: Optional[str],
base_branch: str = "main",
fallback_base: str = "master",
cwd: str = ".",
) -> None:
"""
Merge a feature branch into the main/master branch and optionally delete it.
Steps:
1) Determine branch name (argument or current branch)
2) Resolve base branch (prefers `base_branch`, falls back to `fallback_base`)
3) Ask for confirmation (y/N)
4) git fetch origin
5) git checkout <base>
6) git pull origin <base>
7) git merge --no-ff <name>
8) git push origin <base>
9) Delete branch locally and on origin
If the user does not confirm with 'y', the operation is aborted.
"""
# 1) Determine which branch to close
if not name:
try:
name = get_current_branch(cwd=cwd)
except GitError as exc:
raise RuntimeError(f"Failed to detect current branch: {exc}") from exc
if not name:
raise RuntimeError("Branch name must not be empty.")
# 2) Resolve base branch (main/master)
target_base = _resolve_base_branch(base_branch, fallback_base, cwd=cwd)
if name == target_base:
raise RuntimeError(
f"Refusing to close base branch {target_base!r}. "
"Please specify a feature branch."
)
# 3) Confirmation prompt
prompt = (
f"Merge branch '{name}' into '{target_base}' and delete it afterwards? "
"(y/N): "
)
answer = input(prompt).strip().lower()
if answer != "y":
print("Aborted closing branch.")
return
# 4) Fetch from origin
try:
run_git(["fetch", "origin"], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to fetch from origin before closing branch {name!r}: {exc}"
) from exc
# 5) Checkout base branch
try:
run_git(["checkout", target_base], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to checkout base branch {target_base!r}: {exc}"
) from exc
# 6) Pull latest base
try:
run_git(["pull", "origin", target_base], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to pull latest changes for base branch {target_base!r}: {exc}"
) from exc
# 7) Merge feature branch into base
try:
run_git(["merge", "--no-ff", name], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to merge branch {name!r} into {target_base!r}: {exc}"
) from exc
# 8) Push updated base
try:
run_git(["push", "origin", target_base], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to push base branch {target_base!r} to origin after merge: {exc}"
) from exc
# 9) Delete feature branch locally
try:
run_git(["branch", "-d", name], cwd=cwd)
except GitError as exc:
raise RuntimeError(
f"Failed to delete local branch {name!r} after merge: {exc}"
) from exc
# 10) Delete feature branch on origin (best effort)
try:
run_git(["push", "origin", "--delete", name], cwd=cwd)
except GitError as exc:
# Remote delete is nice-to-have; surface as RuntimeError for clarity.
raise RuntimeError(
f"Branch {name!r} was deleted locally, but remote deletion failed: {exc}"
) from exc
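For orientation, the git sequence behind close_branch() for an example branch 'feature/foo' merged into 'main' looks roughly like this (the Python code above wraps each step in confirmation and error handling):

git fetch origin
git checkout main
git pull origin main
git merge --no-ff feature/foo
git push origin main
git branch -d feature/foo
git push origin --delete feature/foo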

pkgmgr/cli_core/commands/branch.py
View File

@@ -1,9 +1,10 @@
# pkgmgr/cli_core/commands/branch.py
from __future__ import annotations
import sys
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.branch_commands import open_branch
from pkgmgr.branch_commands import open_branch, close_branch
def handle_branch(args, ctx: CLIContext) -> None:
@@ -11,7 +12,8 @@ def handle_branch(args, ctx: CLIContext) -> None:
Handle `pkgmgr branch` subcommands.
Currently supported:
- pkgmgr branch open [<name>] [--base <branch>]
- pkgmgr branch open [<name>] [--base <branch>]
- pkgmgr branch close [<name>] [--base <branch>]
"""
if args.subcommand == "open":
open_branch(
@@ -21,5 +23,13 @@ def handle_branch(args, ctx: CLIContext) -> None:
)
return
if args.subcommand == "close":
close_branch(
name=getattr(args, "name", None),
base_branch=getattr(args, "base", "main"),
cwd=".",
)
return
print(f"Unknown branch subcommand: {args.subcommand}")
sys.exit(2)

pkgmgr/cli_core/commands/release.py
View File

@@ -1,12 +1,31 @@
# pkgmgr/cli_core/commands/release.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Release command wiring for the pkgmgr CLI.
This module implements the `pkgmgr release` subcommand on top of the
generic selection logic from cli_core.dispatch. It does not define its
own subparser; the CLI surface is configured in cli_core.parser.
Responsibilities:
- Take the parsed argparse.Namespace for the `release` command.
- Use the list of selected repositories provided by dispatch_command().
- Optionally list affected repositories when --list is set.
- For each selected repository, run pkgmgr.release.release(...) in
the context of that repository directory.
"""
from __future__ import annotations
import os
import sys
from typing import Any, Dict, List, Optional
from typing import Any, Dict, List
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr import release as rel
from pkgmgr.get_repo_identifier import get_repo_identifier
from pkgmgr.release import release as run_release
Repository = Dict[str, Any]
@@ -18,59 +37,63 @@ def handle_release(
selected: List[Repository],
) -> None:
"""
Handle the 'release' command.
Handle the `pkgmgr release` subcommand.
Creates a release by incrementing the version and updating the changelog
in a single selected repository.
Important:
- Releases are strictly limited to exactly ONE repository.
- Using --all or specifying multiple identifiers for release does
not make sense and is therefore rejected.
- The --preview flag is respected and passed through to the release
implementation so that no changes are made in preview mode.
Flow:
1) Use the `selected` repositories as computed by dispatch_command().
2) If --list is given, print the identifiers of the selected repos
and return without running any release.
3) For each selected repository:
- Resolve its identifier and local directory.
- Change into that directory.
- Call pkgmgr.release.release(...) with the parsed options.
"""
if not selected:
print("No repositories selected for release.")
sys.exit(1)
print("[pkgmgr] No repositories selected for release.")
return
# List-only mode: show which repositories would be affected.
if getattr(args, "list", False):
print("[pkgmgr] Repositories that would be affected by this release:")
for repo in selected:
identifier = get_repo_identifier(repo, ctx.all_repositories)
print(f" - {identifier}")
return
for repo in selected:
identifier = get_repo_identifier(repo, ctx.all_repositories)
repo_dir = repo.get("directory")
if not repo_dir:
try:
repo_dir = get_repo_dir(ctx.repositories_base_dir, repo)
except Exception:
repo_dir = None
if not repo_dir or not os.path.isdir(repo_dir):
print(
f"[WARN] Skipping repository {identifier}: "
"local directory does not exist."
)
continue
if len(selected) > 1:
print(
"[ERROR] Release operations are limited to a single repository.\n"
"Do not use --all or multiple identifiers with 'pkgmgr release'."
f"[pkgmgr] Running release for repository {identifier} "
f"in '{repo_dir}'..."
)
sys.exit(1)
original_dir = os.getcwd()
repo = selected[0]
repo_dir: Optional[str] = repo.get("directory")
if not repo_dir:
repo_dir = get_repo_dir(ctx.repositories_base_dir, repo)
if not os.path.isdir(repo_dir):
print(
f"[ERROR] Repository directory does not exist locally: {repo_dir}"
)
sys.exit(1)
pyproject_path = os.path.join(repo_dir, "pyproject.toml")
changelog_path = os.path.join(repo_dir, "CHANGELOG.md")
print(
f"Releasing repository '{repo.get('repository')}' in '{repo_dir}'..."
)
os.chdir(repo_dir)
try:
rel.release(
pyproject_path=pyproject_path,
changelog_path=changelog_path,
release_type=args.release_type,
message=args.message,
preview=getattr(args, "preview", False),
)
finally:
os.chdir(original_dir)
# Change to repo directory and invoke the helper.
cwd_before = os.getcwd()
try:
os.chdir(repo_dir)
run_release(
pyproject_path="pyproject.toml",
changelog_path="CHANGELOG.md",
release_type=args.release_type,
message=args.message or None,
preview=getattr(args, "preview", False),
force=getattr(args, "force", False),
close=getattr(args, "close", False),
)
finally:
os.chdir(cwd_before)

pkgmgr/cli_core/dispatch.py
View File

@@ -3,12 +3,14 @@
from __future__ import annotations
import os
import sys
from typing import List
from typing import List, Dict, Any
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.cli_core.proxy import maybe_handle_proxy
from pkgmgr.get_selected_repos import get_selected_repos
from pkgmgr.get_repo_dir import get_repo_dir
from pkgmgr.cli_core.commands import (
handle_repos_command,
@@ -22,6 +24,63 @@ from pkgmgr.cli_core.commands import (
)
def _has_explicit_selection(args) -> bool:
"""
Return True if the user explicitly selected repositories via
identifiers / --all / --category / --tag / --string.
"""
identifiers = getattr(args, "identifiers", []) or []
use_all = getattr(args, "all", False)
categories = getattr(args, "category", []) or []
tags = getattr(args, "tag", []) or []
string_filter = getattr(args, "string", "") or ""
return bool(
use_all
or identifiers
or categories
or tags
or string_filter
)
def _select_repo_for_current_directory(
ctx: CLIContext,
) -> List[Dict[str, Any]]:
"""
Heuristic: find the repository whose local directory matches the
current working directory or is the closest parent.
Example:
- Repo directory: /home/kevin/Repositories/foo
- CWD: /home/kevin/Repositories/foo/subdir
'foo' is selected.
"""
cwd = os.path.abspath(os.getcwd())
candidates: List[tuple[str, Dict[str, Any]]] = []
for repo in ctx.all_repositories:
repo_dir = repo.get("directory")
if not repo_dir:
try:
repo_dir = get_repo_dir(ctx.repositories_base_dir, repo)
except Exception:
repo_dir = None
if not repo_dir:
continue
repo_dir_abs = os.path.abspath(os.path.expanduser(repo_dir))
if cwd == repo_dir_abs or cwd.startswith(repo_dir_abs + os.sep):
candidates.append((repo_dir_abs, repo))
if not candidates:
return []
# Pick the repo with the longest (most specific) path.
candidates.sort(key=lambda item: len(item[0]), reverse=True)
return [candidates[0][1]]
def dispatch_command(args, ctx: CLIContext) -> None:
"""
Dispatch the parsed arguments to the appropriate command handler.
@@ -52,7 +111,15 @@ def dispatch_command(args, ctx: CLIContext) -> None:
]
if getattr(args, "command", None) in commands_with_selection:
selected = get_selected_repos(args, ctx.all_repositories)
if _has_explicit_selection(args):
# Classic selection logic (identifiers / --all / filters)
selected = get_selected_repos(args, ctx.all_repositories)
else:
# Default per help text: repository of current folder.
selected = _select_repo_for_current_directory(ctx)
# If none is found, leave 'selected' empty.
# Individual handlers will then emit a clear message instead
# of silently picking an unrelated repository.
else:
selected = []

pkgmgr/cli_core/parser.py
View File

@@ -316,7 +316,7 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
# ------------------------------------------------------------
branch_parser = subparsers.add_parser(
"branch",
help="Branch-related utilities (e.g. open feature branches)",
help="Branch-related utilities (e.g. open/close feature branches)",
)
branch_subparsers = branch_parser.add_subparsers(
dest="subcommand",
@@ -342,6 +342,27 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
help="Base branch to create the new branch from (default: main)",
)
branch_close = branch_subparsers.add_parser(
"close",
help="Merge a feature branch into base and delete it",
)
branch_close.add_argument(
"name",
nargs="?",
help=(
"Name of the branch to close (optional; current branch is used "
"if omitted)"
),
)
branch_close.add_argument(
"--base",
default="main",
help=(
"Base branch to merge into (default: main; falls back to master "
"internally if main does not exist)"
),
)
# ------------------------------------------------------------
# release
# ------------------------------------------------------------
@@ -360,10 +381,32 @@ def create_parser(description_text: str) -> argparse.ArgumentParser:
release_parser.add_argument(
"-m",
"--message",
default="",
help="Optional release message to add to the changelog and tag.",
default=None,
help=(
"Optional release message to add to the changelog and tag."
),
)
# Generic selection / preview / list / extra_args
add_identifier_arguments(release_parser)
# Close current branch after successful release
release_parser.add_argument(
"--close",
action="store_true",
help=(
"Close the current branch after a successful release in each "
"repository, if it is not main/master."
),
)
# Force: skip preview+confirmation and run release directly
release_parser.add_argument(
"-f",
"--force",
action="store_true",
help=(
"Skip the interactive preview+confirmation step and run the "
"release directly."
),
)
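Example invocations combining the new release flags (the release-type positional is defined outside the hunk shown here and is assumed to accept patch/minor/major):

pkgmgr release patch -m "Fix container tests"   # interactive two-phase flow (preview, then confirm)
pkgmgr release minor --close                    # close the current feature branch after a successful release
pkgmgr release patch -f -m "Hotfix"             # skip the preview/confirmation step entirely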
# ------------------------------------------------------------
# version

pkgmgr/cli_core/proxy.py
View File

@@ -4,14 +4,16 @@
from __future__ import annotations
import argparse
import os
import sys
from typing import Dict, List
from typing import Dict, List, Any
from pkgmgr.cli_core.context import CLIContext
from pkgmgr.clone_repos import clone_repos
from pkgmgr.exec_proxy_command import exec_proxy_command
from pkgmgr.pull_with_verification import pull_with_verification
from pkgmgr.get_selected_repos import get_selected_repos
from pkgmgr.get_repo_dir import get_repo_dir
PROXY_COMMANDS: Dict[str, List[str]] = {
@@ -104,6 +106,57 @@ def _add_proxy_identifier_arguments(parser: argparse.ArgumentParser) -> None:
)
def _proxy_has_explicit_selection(args: argparse.Namespace) -> bool:
"""
Same semantics as in the main dispatch:
True if the user explicitly selected repositories.
"""
identifiers = getattr(args, "identifiers", []) or []
use_all = getattr(args, "all", False)
categories = getattr(args, "category", []) or []
string_filter = getattr(args, "string", "") or ""
# Proxy commands currently do not support --tag, so it is not checked here.
return bool(
use_all
or identifiers
or categories
or string_filter
)
def _select_repo_for_current_directory(
ctx: CLIContext,
) -> List[Dict[str, Any]]:
"""
Heuristic: find the repository whose local directory matches the
current working directory or is the closest parent.
"""
cwd = os.path.abspath(os.getcwd())
candidates: List[tuple[str, Dict[str, Any]]] = []
for repo in ctx.all_repositories:
repo_dir = repo.get("directory")
if not repo_dir:
try:
repo_dir = get_repo_dir(ctx.repositories_base_dir, repo)
except Exception:
repo_dir = None
if not repo_dir:
continue
repo_dir_abs = os.path.abspath(os.path.expanduser(repo_dir))
if cwd == repo_dir_abs or cwd.startswith(repo_dir_abs + os.sep):
candidates.append((repo_dir_abs, repo))
if not candidates:
return []
# Pick the repo with the longest (most specific) path.
candidates.sort(key=lambda item: len(item[0]), reverse=True)
return [candidates[0][1]]
def register_proxy_commands(
subparsers: argparse._SubParsersAction,
) -> None:
@@ -157,7 +210,17 @@ def maybe_handle_proxy(args: argparse.Namespace, ctx: CLIContext) -> bool:
if args.command not in all_proxy_subcommands:
return False
selected = get_selected_repos(args, ctx.all_repositories)
# Default semantics: without explicit selection → repo of current folder.
if _proxy_has_explicit_selection(args):
selected = get_selected_repos(args, ctx.all_repositories)
else:
selected = _select_repo_for_current_directory(ctx)
if not selected:
print(
"[ERROR] No repository matches the current directory. "
"Specify identifiers or use --all/--category/--string."
)
sys.exit(1)
for command, subcommands in PROXY_COMMANDS.items():
if args.command not in subcommands:
@@ -194,4 +257,4 @@ def maybe_handle_proxy(args: argparse.Namespace, ctx: CLIContext) -> bool:
sys.exit(0)
return True
return True

pkgmgr/release.py
View File

@@ -1,3 +1,4 @@
# pkgmgr/release.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
@@ -22,12 +23,14 @@ Additional behaviour:
phases:
1) Preview-only run (dry-run).
2) Interactive confirmation, then real release if confirmed.
This confirmation can be skipped with the `-f/--force` flag.
This confirmation can be skipped with the `force=True` flag.
- If `close=True` is used and the current branch is not main/master,
the branch will be closed via branch_commands.close_branch() after
a successful release.
"""
from __future__ import annotations
import argparse
import os
import re
import subprocess
@@ -37,6 +40,7 @@ from datetime import date, datetime
from typing import Optional, Tuple
from pkgmgr.git_utils import get_tags, get_current_branch, GitError
from pkgmgr.branch_commands import close_branch
from pkgmgr.versioning import (
SemVer,
find_latest_version,
@@ -137,7 +141,6 @@ def _open_editor_for_changelog(initial_message: Optional[str] = None) -> str:
encoding="utf-8",
) as tmp:
tmp_path = tmp.name
# Prefill with instructions as comments
tmp.write(
"# Write the changelog entry for this release.\n"
"# Lines starting with '#' will be ignored.\n"
@@ -147,10 +150,14 @@ def _open_editor_for_changelog(initial_message: Optional[str] = None) -> str:
tmp.write(initial_message.strip() + "\n")
tmp.flush()
# Open editor
subprocess.call([editor, tmp_path])
try:
subprocess.call([editor, tmp_path])
except FileNotFoundError:
print(
f"[WARN] Editor {editor!r} not found; proceeding without "
"interactive changelog message."
)
# Read back content
try:
with open(tmp_path, "r", encoding="utf-8") as f:
content = f.read()
@@ -160,7 +167,6 @@ def _open_editor_for_changelog(initial_message: Optional[str] = None) -> str:
except OSError:
pass
# Filter out commented lines and return joined text
lines = [
line for line in content.splitlines()
if not line.strip().startswith("#")
@@ -186,14 +192,6 @@ def update_pyproject_version(
version = "X.Y.Z"
and replaces the version part with the given new_version string.
It does not try to parse the full TOML structure here. This keeps the
implementation small and robust as long as the version line follows
the standard pattern.
Behaviour:
- In normal mode: write the updated content back to the file.
- In preview mode: do NOT write, only report what would change.
"""
try:
with open(pyproject_path, "r", encoding="utf-8") as f:
@@ -231,13 +229,6 @@ def update_flake_version(
) -> None:
"""
Update the version in flake.nix, if present.
Looks for a line like:
version = "1.2.3";
and replaces the string inside the quotes. If the file does not
exist or no version line is found, this is treated as a non-fatal
condition and only a log message is printed.
"""
if not os.path.exists(flake_path):
print("[INFO] flake.nix not found, skipping.")
@@ -282,13 +273,6 @@ def update_pkgbuild_version(
Expects:
pkgver=1.2.3
pkgrel=1
Behaviour:
- Set pkgver to the new_version (e.g. 1.2.3).
- Reset pkgrel to 1.
If the file does not exist, this is non-fatal and only a log
message is printed.
"""
if not os.path.exists(pkgbuild_path):
print("[INFO] PKGBUILD not found, skipping.")
@@ -301,7 +285,6 @@ def update_pkgbuild_version(
print(f"[WARN] Could not read PKGBUILD: {exc}")
return
# Update pkgver
ver_pattern = r"^(pkgver\s*=\s*)(.+)$"
new_content, ver_count = re.subn(
ver_pattern,
@@ -312,9 +295,8 @@ def update_pkgbuild_version(
if ver_count == 0:
print("[WARN] No pkgver line found in PKGBUILD.")
new_content = content # revert to original if we didn't change anything
new_content = content
# Reset pkgrel to 1
rel_pattern = r"^(pkgrel\s*=\s*)(.+)$"
new_content, rel_count = re.subn(
rel_pattern,
@@ -343,19 +325,6 @@ def update_spec_version(
) -> None:
"""
Update the version in an RPM spec file, if present.
Assumes a file like 'package-manager.spec' with lines:
Version: 1.2.3
Release: 1%{?dist}
Behaviour:
- Set 'Version:' to new_version.
- Reset 'Release:' to '1' while preserving any macro suffix,
e.g. '1%{?dist}'.
If the file does not exist, this is non-fatal and only a log
message is printed.
"""
if not os.path.exists(spec_path):
print("[INFO] RPM spec file not found, skipping.")
@@ -368,7 +337,6 @@ def update_spec_version(
print(f"[WARN] Could not read spec file: {exc}")
return
# Update Version:
ver_pattern = r"^(Version:\s*)(.+)$"
new_content, ver_count = re.subn(
ver_pattern,
@@ -380,12 +348,10 @@ def update_spec_version(
if ver_count == 0:
print("[WARN] No 'Version:' line found in spec file.")
# Reset Release:
rel_pattern = r"^(Release:\s*)(.+)$"
def _release_repl(m: re.Match[str]) -> str: # type: ignore[name-defined]
rest = m.group(2).strip()
# Reset numeric prefix to "1" and keep any suffix (e.g. % macros).
match = re.match(r"^(\d+)(.*)$", rest)
if match:
suffix = match.group(2)
@@ -428,21 +394,11 @@ def update_changelog(
"""
Prepend a new release section to CHANGELOG.md with the new version,
current date, and a message.
Behaviour:
- If message is None and preview is False:
→ open $EDITOR (fallback 'nano') to let the user enter a message.
- If message is None and preview is True:
→ use a generic automated message.
- The resulting changelog entry is printed to stdout.
- Returns the final message text used.
"""
today = date.today().isoformat()
# Resolve message
if message is None:
if preview:
# Do not open editor in preview mode; keep it non-interactive.
message = "Automated release."
else:
print(
@@ -470,7 +426,6 @@ def update_changelog(
new_changelog = header + "\n" + changelog if changelog else header
# Show the entry that will be written
print("\n================ CHANGELOG ENTRY ================")
print(header.rstrip())
print("=================================================\n")
@@ -495,8 +450,6 @@ def update_changelog(
def _get_git_config_value(key: str) -> Optional[str]:
"""
Try to read a value from `git config --get <key>`.
Returns the stripped value or None if not set / on error.
"""
try:
result = subprocess.run(
@@ -515,12 +468,6 @@ def _get_git_config_value(key: str) -> Optional[str]:
def _get_debian_author() -> Tuple[str, str]:
"""
Determine the maintainer name/email for debian/changelog entries.
Priority:
1. DEBFULLNAME / DEBEMAIL
2. GIT_AUTHOR_NAME / GIT_AUTHOR_EMAIL
3. git config user.name / user.email
4. Fallback: 'Unknown Maintainer' / 'unknown@example.com'
"""
name = os.environ.get("DEBFULLNAME")
email = os.environ.get("DEBEMAIL")
@@ -552,12 +499,6 @@ def update_debian_changelog(
) -> None:
"""
Prepend a new entry to debian/changelog, if it exists.
The first line typically looks like:
package-name (1.2.3-1) unstable; urgency=medium
We generate a new stanza at the top with Debian-style version
'X.Y.Z-1'. If the file does not exist, this function does nothing.
"""
if not os.path.exists(debian_changelog_path):
print("[INFO] debian/changelog not found, skipping.")
@@ -565,15 +506,12 @@ def update_debian_changelog(
debian_version = f"{new_version}-1"
now = datetime.now().astimezone()
# Debian-like date string, e.g. "Mon, 08 Dec 2025 12:34:56 +0100"
date_str = now.strftime("%a, %d %b %Y %H:%M:%S %z")
author_name, author_email = _get_debian_author()
first_line = f"{package_name} ({debian_version}) unstable; urgency=medium"
body_line = (
message.strip() if message else f"Automated release {new_version}."
)
body_line = message.strip() if message else f"Automated release {new_version}."
stanza = (
f"{first_line}\n\n"
f" * {body_line}\n\n"
@@ -613,23 +551,12 @@ def _release_impl(
release_type: str = "patch",
message: Optional[str] = None,
preview: bool = False,
close: bool = False,
) -> None:
"""
Internal implementation that performs a single-phase release.
If `preview` is True:
- No files are written.
- No git commands are executed.
- Planned actions are printed.
If `preview` is False:
- Files are updated.
- Git commit, tag, and push are executed.
"""
# 1) Determine the current version from Git tags.
current_ver = _determine_current_version()
# 2) Compute the next version.
new_ver = _bump_semver(current_ver, release_type)
new_ver_str = str(new_ver)
new_tag = new_ver.to_tag(with_prefix=True)
@@ -639,20 +566,16 @@ def _release_impl(
print(f"Current version: {current_ver}")
print(f"New version: {new_ver_str} ({release_type})")
# Determine repository root based on pyproject location
repo_root = os.path.dirname(os.path.abspath(pyproject_path))
# 2) Update files.
update_pyproject_version(pyproject_path, new_ver_str, preview=preview)
# Let update_changelog resolve or edit the message; reuse it for debian.
message = update_changelog(
changelog_message = update_changelog(
changelog_path,
new_ver_str,
message=message,
preview=preview,
)
# Additional packaging files (non-fatal if missing)
flake_path = os.path.join(repo_root, "flake.nix")
update_flake_version(flake_path, new_ver_str, preview=preview)
@@ -662,20 +585,23 @@ def _release_impl(
spec_path = os.path.join(repo_root, "package-manager.spec")
update_spec_version(spec_path, new_ver_str, preview=preview)
effective_message: Optional[str] = message
if effective_message is None and isinstance(changelog_message, str):
if changelog_message.strip():
effective_message = changelog_message.strip()
debian_changelog_path = os.path.join(repo_root, "debian", "changelog")
# Use repo directory name as a simple default for package name
package_name = os.path.basename(repo_root) or "package-manager"
update_debian_changelog(
debian_changelog_path,
package_name=package_name,
new_version=new_ver_str,
message=message,
message=effective_message,
preview=preview,
)
# 3) Git operations: stage, commit, tag, push.
commit_msg = f"Release version {new_ver_str}"
tag_msg = message or commit_msg
tag_msg = effective_message or commit_msg
try:
branch = get_current_branch() or "main"
@@ -683,7 +609,6 @@ def _release_impl(
branch = "main"
print(f"Releasing on branch: {branch}")
# Stage all relevant packaging files so they are included in the commit
files_to_add = [
pyproject_path,
changelog_path,
@@ -701,6 +626,18 @@ def _release_impl(
print(f'[PREVIEW] Would run: git tag -a {new_tag} -m "{tag_msg}"')
print(f"[PREVIEW] Would run: git push origin {branch}")
print("[PREVIEW] Would run: git push origin --tags")
if close and branch not in ("main", "master"):
print(
f"[PREVIEW] Would also close branch {branch} after the release "
"(close=True and branch is not main/master)."
)
elif close:
print(
f"[PREVIEW] close=True but current branch is {branch}; "
"no branch would be closed."
)
print("Preview completed. No changes were made.")
return
@@ -714,9 +651,26 @@ def _release_impl(
print(f"Release {new_ver_str} completed.")
if close:
if branch in ("main", "master"):
print(
f"[INFO] close=True but current branch is {branch}; "
"nothing to close."
)
return
print(
f"[INFO] Closing branch {branch} after successful release "
"(close=True and branch is not main/master)..."
)
try:
close_branch(name=branch, base_branch="main", cwd=".")
except Exception as exc: # pragma: no cover
print(f"[WARN] Failed to close branch {branch} automatically: {exc}")
# ---------------------------------------------------------------------------
# Public release entry point (with preview-first + confirmation logic)
# Public release entry point
# ---------------------------------------------------------------------------
@@ -727,6 +681,7 @@ def release(
message: Optional[str] = None,
preview: bool = False,
force: bool = False,
close: bool = False,
) -> None:
"""
High-level release entry point.
@@ -735,26 +690,13 @@ def release(
- preview=True:
* Single-phase PREVIEW only.
* No files are changed, no git commands are executed.
* `force` is ignored in this mode.
- preview=False, force=True:
* Single-phase REAL release, no interactive preview.
* Files are changed and git commands are executed immediately.
- preview=False, force=False:
* Two-phase flow (intended default for interactive CLI use):
1) PREVIEW: dry-run, printing all planned actions.
2) Ask the user for confirmation:
"Proceed with the actual release? [y/N]: "
If confirmed, perform the REAL release.
Otherwise, abort without changes.
* In non-interactive environments (stdin not a TTY), the
confirmation step is skipped automatically and a single
REAL phase is executed, to avoid blocking on input().
* Two-phase flow (intended default for interactive CLI use).
"""
# Explicit preview mode: just do a single PREVIEW phase and exit.
if preview:
_release_impl(
pyproject_path=pyproject_path,
@@ -762,10 +704,10 @@ def release(
release_type=release_type,
message=message,
preview=True,
close=close,
)
return
# Non-preview, but forced: run REAL release directly.
if force:
_release_impl(
pyproject_path=pyproject_path,
@@ -773,10 +715,10 @@ def release(
release_type=release_type,
message=message,
preview=False,
close=close,
)
return
# Non-interactive environment? Skip confirmation to avoid blocking.
if not sys.stdin.isatty():
_release_impl(
pyproject_path=pyproject_path,
@@ -784,10 +726,10 @@ def release(
release_type=release_type,
message=message,
preview=False,
close=close,
)
return
# Interactive two-phase flow:
print("[INFO] Running preview before actual release...\n")
_release_impl(
pyproject_path=pyproject_path,
@@ -795,9 +737,9 @@ def release(
release_type=release_type,
message=message,
preview=True,
close=close,
)
# Ask for confirmation
try:
answer = input("Proceed with the actual release? [y/N]: ").strip().lower()
except (EOFError, KeyboardInterrupt):
@@ -815,68 +757,5 @@ def release(
release_type=release_type,
message=message,
preview=False,
)
# ---------------------------------------------------------------------------
# CLI entry point for standalone use
# ---------------------------------------------------------------------------
def _parse_args(argv: Optional[list[str]] = None) -> argparse.Namespace:
parser = argparse.ArgumentParser(description="pkgmgr release helper")
parser.add_argument(
"release_type",
choices=["major", "minor", "patch"],
help="Type of release (major/minor/patch).",
)
parser.add_argument(
"-m",
"--message",
dest="message",
default=None,
help="Release message to use for changelog and tag.",
)
parser.add_argument(
"--pyproject",
dest="pyproject",
default="pyproject.toml",
help="Path to pyproject.toml (default: pyproject.toml)",
)
parser.add_argument(
"--changelog",
dest="changelog",
default="CHANGELOG.md",
help="Path to CHANGELOG.md (default: CHANGELOG.md)",
)
parser.add_argument(
"--preview",
action="store_true",
help=(
"Preview release changes without modifying files or running git. "
"This mode never executes the real release."
),
)
parser.add_argument(
"-f",
"--force",
dest="force",
action="store_true",
help=(
"Skip the interactive preview+confirmation step and run the "
"release directly."
),
)
return parser.parse_args(argv)
if __name__ == "__main__":
args = _parse_args()
release(
pyproject_path=args.pyproject,
changelog_path=args.changelog,
release_type=args.release_type,
message=args.message,
preview=args.preview,
force=args.force,
close=close,
)
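For orientation, the flow above is normally driven through the pkgmgr CLI rather than called directly; a minimal sketch, assuming the console script is installed (the flag spellings match the tests further down, the message text and repository selection are illustrative):

    # Dry run: print all planned actions, change nothing.
    pkgmgr release patch --preview

    # Real release without the interactive confirmation; afterwards the
    # current feature branch is closed (a no-op on main/master).
    pkgmgr release minor -m "Describe the change" -f --close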

View File

@@ -7,7 +7,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "package-manager"
version = "0.3.0"
version = "0.6.0"
description = "Kevin's package-manager tool (pkgmgr)"
readme = "README.md"
requires-python = ">=3.11"

View File

@@ -0,0 +1,35 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/resolve-base-image.sh"
echo "============================================================"
echo ">>> Building ONLY missing container images"
echo "============================================================"
for distro in $DISTROS; do
IMAGE="package-manager-test-$distro"
BASE_IMAGE="$(resolve_base_image "$distro")"
if docker image inspect "$IMAGE" >/dev/null 2>&1; then
echo "[build-missing] Image already exists: $IMAGE (skipping)"
continue
fi
echo
echo "------------------------------------------------------------"
echo "[build-missing] Building missing image: $IMAGE"
echo "BASE_IMAGE = $BASE_IMAGE"
echo "------------------------------------------------------------"
docker build \
--build-arg BASE_IMAGE="$BASE_IMAGE" \
-t "$IMAGE" \
.
done
echo
echo "============================================================"
echo ">>> build-missing: Done"
echo "============================================================"

View File

@@ -0,0 +1,17 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/resolve-base-image.sh"
for distro in $DISTROS; do
base_image="$(resolve_base_image "$distro")"
echo ">>> Building test image for distro '$distro' with NO CACHE (BASE_IMAGE=$base_image)..."
docker build \
--no-cache \
--build-arg BASE_IMAGE="$base_image" \
-t "package-manager-test-$distro" \
.
done

16
scripts/build/build-image.sh Executable file
View File

@@ -0,0 +1,16 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/resolve-base-image.sh"
for distro in $DISTROS; do
base_image="$(resolve_base_image "$distro")"
echo ">>> Building test image for distro '$distro' (BASE_IMAGE=$base_image)..."
docker build \
--build-arg BASE_IMAGE="$base_image" \
-t "package-manager-test-$distro" \
.
done

View File

@@ -0,0 +1,18 @@
#!/usr/bin/env bash
set -euo pipefail
resolve_base_image() {
local distro="$1"
case "$distro" in
arch) echo "$BASE_IMAGE_ARCH" ;;
debian) echo "$BASE_IMAGE_DEBIAN" ;;
ubuntu) echo "$BASE_IMAGE_UBUNTU" ;;
fedora) echo "$BASE_IMAGE_FEDORA" ;;
centos) echo "$BASE_IMAGE_CENTOS" ;;
*)
echo "ERROR: Unknown distro '$distro'" >&2
exit 1
;;
esac
}
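A hedged sketch of how the helper resolves a distro name to its base image once the BASE_IMAGE_* variables are in the environment (the image tag is illustrative):

    export BASE_IMAGE_UBUNTU=ubuntu:24.04
    source scripts/build/resolve-base-image.sh
    resolve_base_image ubuntu   # prints "ubuntu:24.04"
    # An unknown distro prints an error to stderr and exits with status 1.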

View File

@@ -1,22 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[entry] Using /src as working tree for package-manager..."
cd /src
# Optional: remove the old package (if installed)
echo "[entry] Removing existing 'package-manager' Arch package (if installed)..."
pacman -Rns --noconfirm package-manager || true
# Set the build owner correctly (in case /src is mounted from the host)
echo "[entry] Fixing ownership of /src for user 'builder'..."
chown -R builder:builder /src
echo "[entry] Rebuilding Arch package from /src as user 'builder'..."
su builder -c "cd /src && makepkg -s --noconfirm --clean"
echo "[entry] Installing freshly built package-manager-*.pkg.tar.*..."
pacman -U --noconfirm /src/package-manager-*.pkg.tar.*
echo "[entry] Handing off to pkgmgr with args: $*"
exec pkgmgr "$@"

61
scripts/docker/entry.sh Executable file
View File

@@ -0,0 +1,61 @@
#!/usr/bin/env bash
set -euo pipefail
# ---------------------------------------------------------------------------
# Ensure Nix has access to a valid CA bundle (TLS trust store)
# ---------------------------------------------------------------------------
if [[ -z "${NIX_SSL_CERT_FILE:-}" ]]; then
if [[ -f /etc/ssl/certs/ca-certificates.crt ]]; then
# Debian/Ubuntu-style path
export NIX_SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
echo "[docker] Using CA bundle: ${NIX_SSL_CERT_FILE}"
elif [[ -f /etc/pki/tls/certs/ca-bundle.crt ]]; then
# Fedora/RHEL/CentOS-style path
export NIX_SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt
echo "[docker] Using CA bundle: ${NIX_SSL_CERT_FILE}"
else
echo "[docker] WARNING: No CA bundle found for Nix (NIX_SSL_CERT_FILE not set)."
echo "[docker] HTTPS access for Nix flakes may fail."
fi
fi
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
echo "[docker] Starting package-manager container"
# Distro info for logging
if [[ -f /etc/os-release ]]; then
# shellcheck disable=SC1091
. /etc/os-release
echo "[docker] Detected distro: ${ID:-unknown} (like: ${ID_LIKE:-})"
fi
# Always use /src (mounted from host) as working directory
echo "[docker] Using /src as working directory"
cd /src
# ------------------------------------------------------------
# DEV mode: build/install package-manager from current /src
# ------------------------------------------------------------
if [[ "${PKGMGR_DEV:-0}" == "1" ]]; then
echo "[docker] DEV mode enabled (PKGMGR_DEV=1)"
echo "[docker] Rebuilding package-manager from /src via scripts/installation/run-package.sh..."
if [[ -x scripts/installation/run-package.sh ]]; then
bash scripts/installation/run-package.sh
else
echo "[docker] ERROR: scripts/installation/run-package.sh not found or not executable"
exit 1
fi
fi
# ------------------------------------------------------------
# Hand-off to pkgmgr / arbitrary command
# ------------------------------------------------------------
if [[ $# -eq 0 ]]; then
echo "[docker] No arguments provided. Showing pkgmgr help..."
exec pkgmgr --help
else
echo "[docker] Executing command: $*"
exec "$@"
fi
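Mirroring what the test scripts below do, the entrypoint can be exercised against one of the prebuilt test images; without extra arguments it falls back to `pkgmgr --help` (the image name assumes the corresponding build target has been run):

    docker run --rm \
      -e PKGMGR_DEV=1 \
      -v "$(pwd):/src" \
      -v "pkgmgr_nix_cache:/root/.cache/nix" \
      package-manager-test-debian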

222
scripts/init-nix.sh Normal file → Executable file
View File

@@ -1,44 +1,208 @@
#!/usr/bin/env bash
set -euo pipefail
echo ">>> Initializing Nix environment for package-manager..."
echo "[init-nix] Starting Nix initialization..."
# 1. /nix store
if [ ! -d /nix ]; then
echo ">>> Creating /nix store directory"
mkdir -m 0755 /nix
chown root:root /nix
# ---------------------------------------------------------------------------
# Helper: detect whether we are inside a container (Docker/Podman/etc.)
# ---------------------------------------------------------------------------
is_container() {
# Docker / Podman markers
if [[ -f /.dockerenv ]] || [[ -f /run/.containerenv ]]; then
return 0
fi
# cgroup hints
if grep -qiE 'docker|container|podman|lxc' /proc/1/cgroup 2>/dev/null; then
return 0
fi
# Environment variable used by some runtimes
if [[ -n "${container:-}" ]]; then
return 0
fi
return 1
}
# ---------------------------------------------------------------------------
# Helper: ensure Nix binaries are on PATH (multi-user or single-user)
# ---------------------------------------------------------------------------
ensure_nix_on_path() {
# Multi-user profile (daemon install)
if [[ -x /nix/var/nix/profiles/default/bin/nix ]]; then
export PATH="/nix/var/nix/profiles/default/bin:${PATH}"
fi
# Single-user profile (current user)
if [[ -x "${HOME}/.nix-profile/bin/nix" ]]; then
export PATH="${HOME}/.nix-profile/bin:${PATH}"
fi
# Single-user profile for dedicated "nix" user (container case)
if [[ -x /home/nix/.nix-profile/bin/nix ]]; then
export PATH="/home/nix/.nix-profile/bin:${PATH}"
fi
}
# ---------------------------------------------------------------------------
# Fast path: Nix already available
# ---------------------------------------------------------------------------
if command -v nix >/dev/null 2>&1; then
echo "[init-nix] Nix already available on PATH: $(command -v nix)"
exit 0
fi
# 2. Enable nix-daemon if available
if command -v systemctl >/dev/null 2>&1 && systemctl list-unit-files | grep -q nix-daemon.service; then
echo ">>> Enabling nix-daemon.service"
systemctl enable --now nix-daemon.service 2>/dev/null || true
ensure_nix_on_path
if command -v nix >/dev/null 2>&1; then
echo "[init-nix] Nix found after adjusting PATH: $(command -v nix)"
exit 0
fi
echo "[init-nix] Nix not found, starting installation logic..."
IN_CONTAINER=0
if is_container; then
IN_CONTAINER=1
echo "[init-nix] Detected container environment."
else
echo ">>> Warning: nix-daemon.service not found or systemctl not available."
echo "[init-nix] No container detected."
fi
# 3. Ensure nix-users group
if ! getent group nix-users >/dev/null 2>&1; then
echo ">>> Creating nix-users group"
groupadd -r nix-users 2>/dev/null || true
fi
# ---------------------------------------------------------------------------
# Container + root: install Nix as dedicated "nix" user (single-user)
# ---------------------------------------------------------------------------
if [[ "${IN_CONTAINER}" -eq 1 && "${EUID:-0}" -eq 0 ]]; then
echo "[init-nix] Running as root inside a container using dedicated 'nix' user."
# 4. Add users to nix-users (best-effort)
if command -v loginctl >/dev/null 2>&1; then
for user in $(loginctl list-users | awk 'NR>1 {print $2}'); do
if id "$user" >/dev/null 2>&1; then
echo ">>> Adding user '$user' to nix-users"
usermod -aG nix-users "$user" 2>/dev/null || true
# Ensure nixbld group (required by Nix)
if ! getent group nixbld >/dev/null 2>&1; then
echo "[init-nix] Creating group 'nixbld'..."
groupadd -r nixbld
fi
# Ensure Nix build users (nixbld1..nixbld10) as members of nixbld
for i in $(seq 1 10); do
if ! id "nixbld$i" >/dev/null 2>&1; then
echo "[init-nix] Creating build user nixbld$i..."
# -r: system account, -g: primary group, -G: supplementary (ensures membership is listed)
useradd -r -g nixbld -G nixbld -s /usr/sbin/nologin "nixbld$i"
fi
done
elif command -v logname >/dev/null 2>&1; then
USERNAME="$(logname 2>/dev/null || true)"
if [ -n "$USERNAME" ] && id "$USERNAME" >/dev/null 2>&1; then
echo ">>> Adding user '$USERNAME' to nix-users"
usermod -aG nix-users "$USERNAME" 2>/dev/null || true
# Ensure "nix" user (home at /home/nix)
if ! id nix >/dev/null 2>&1; then
echo "[init-nix] Creating user 'nix'..."
useradd -m -r -g nixbld -s /usr/bin/bash nix
fi
# Create /nix directory and hand it to nix user (prevents installer sudo prompt)
if [[ ! -d /nix ]]; then
echo "[init-nix] Creating /nix with owner nix:nixbld..."
mkdir -m 0755 /nix
chown nix:nixbld /nix
fi
# Run Nix single-user installer as "nix"
echo "[init-nix] Installing Nix as user 'nix' (single-user, --no-daemon)..."
if command -v sudo >/dev/null 2>&1; then
sudo -u nix bash -lc 'sh <(curl -L https://nixos.org/nix/install) --no-daemon'
else
su - nix -c 'sh <(curl -L https://nixos.org/nix/install) --no-daemon'
fi
# After installation, expose nix to root via PATH and symlink
ensure_nix_on_path
if [[ -x /home/nix/.nix-profile/bin/nix ]]; then
if [[ ! -e /usr/local/bin/nix ]]; then
echo "[init-nix] Creating /usr/local/bin/nix symlink -> /home/nix/.nix-profile/bin/nix"
ln -s /home/nix/.nix-profile/bin/nix /usr/local/bin/nix
fi
fi
ensure_nix_on_path
if command -v nix >/dev/null 2>&1; then
echo "[init-nix] Nix successfully installed (container mode) at: $(command -v nix)"
else
echo "[init-nix] WARNING: Nix installation finished in container, but 'nix' is still not on PATH."
fi
# Optionally add PATH hints to /etc/profile (best effort)
if [[ -w /etc/profile ]]; then
if ! grep -q 'Nix profiles' /etc/profile 2>/dev/null; then
cat <<'EOF' >> /etc/profile
# Nix profiles (added by package-manager init-nix.sh)
if [ -d /nix/var/nix/profiles/default/bin ]; then
PATH="/nix/var/nix/profiles/default/bin:$PATH"
fi
if [ -d "$HOME/.nix-profile/bin" ]; then
PATH="$HOME/.nix-profile/bin:$PATH"
fi
EOF
echo "[init-nix] Appended Nix PATH setup to /etc/profile (container mode)."
fi
fi
echo "[init-nix] Nix initialization complete (container root mode)."
exit 0
fi
# ---------------------------------------------------------------------------
# Non-container or non-root container: normal installer paths
# ---------------------------------------------------------------------------
if [[ "${IN_CONTAINER}" -eq 0 ]]; then
# Real host
if command -v systemctl >/dev/null 2>&1; then
echo "[init-nix] Host with systemd using multi-user install (--daemon)."
sh <(curl -L https://nixos.org/nix/install) --daemon
else
if [[ "${EUID:-0}" -eq 0 ]]; then
echo "[init-nix] WARNING: Running as root without systemd on host."
echo "[init-nix] Falling back to single-user install (--no-daemon), but this is not recommended."
sh <(curl -L https://nixos.org/nix/install) --no-daemon
else
echo "[init-nix] Non-root host without systemd using single-user install (--no-daemon)."
sh <(curl -L https://nixos.org/nix/install) --no-daemon
fi
fi
else
# Container, but not root (rare)
echo "[init-nix] Container as non-root user using single-user install (--no-daemon)."
sh <(curl -L https://nixos.org/nix/install) --no-daemon
fi
# ---------------------------------------------------------------------------
# After installation: fix PATH (runtime + shell profiles)
# ---------------------------------------------------------------------------
ensure_nix_on_path
if ! command -v nix >/dev/null 2>&1; then
echo "[init-nix] WARNING: Nix installation finished, but 'nix' is still not on PATH."
echo "[init-nix] You may need to source your shell profile manually."
exit 0
fi
echo "[init-nix] Nix successfully installed at: $(command -v nix)"
# Update global /etc/profile if writable (helps especially on minimal systems)
if [[ -w /etc/profile ]]; then
if ! grep -q 'Nix profiles' /etc/profile 2>/dev/null; then
cat <<'EOF' >> /etc/profile
# Nix profiles (added by package-manager init-nix.sh)
if [ -d /nix/var/nix/profiles/default/bin ]; then
PATH="/nix/var/nix/profiles/default/bin:$PATH"
fi
if [ -d "$HOME/.nix-profile/bin" ]; then
PATH="$HOME/.nix-profile/bin:$PATH"
fi
EOF
echo "[init-nix] Appended Nix PATH setup to /etc/profile"
fi
fi
echo ">>> Nix initialization complete."
echo ">>> You may need to log out and log back in to activate group membership."
echo "[init-nix] Nix initialization complete."

View File

@@ -0,0 +1,54 @@
#!/usr/bin/env bash
set -euo pipefail
# ------------------------------------------------------------
# aur-builder-setup.sh
#
# Setup helper for an 'aur_builder' user and yay on Arch-based
# systems. Intended for host usage and can also be used in
# containers if desired.
# ------------------------------------------------------------
echo "[aur-builder-setup] Checking for pacman..."
if ! command -v pacman >/dev/null 2>&1; then
echo "[aur-builder-setup] pacman not found this is not an Arch-based system. Skipping."
exit 0
fi
if [[ "${EUID:-0}" -ne 0 ]]; then
ROOT_CMD="sudo"
else
ROOT_CMD=""
fi
echo "[aur-builder-setup] Installing base-devel, git, sudo..."
${ROOT_CMD} pacman -Syu --noconfirm
${ROOT_CMD} pacman -S --needed --noconfirm base-devel git sudo
echo "[aur-builder-setup] Ensuring aur_builder group/user..."
if ! getent group aur_builder >/dev/null 2>&1; then
${ROOT_CMD} groupadd -r aur_builder
fi
if ! id -u aur_builder >/dev/null 2>&1; then
${ROOT_CMD} useradd -m -r -g aur_builder -s /bin/bash aur_builder
fi
echo "[aur-builder-setup] Configuring sudoers for aur_builder..."
${ROOT_CMD} bash -c "echo '%aur_builder ALL=(ALL) NOPASSWD: /usr/bin/pacman' > /etc/sudoers.d/aur_builder"
${ROOT_CMD} chmod 0440 /etc/sudoers.d/aur_builder
if command -v sudo >/dev/null 2>&1; then
RUN_AS_AUR=(sudo -u aur_builder bash -lc)
else
RUN_AS_AUR=(su - aur_builder -c)
fi
echo "[aur-builder-setup] Ensuring yay is installed for aur_builder..."
if ! "${RUN_AS_AUR[@]}" 'command -v yay >/dev/null 2>&1'; then
"${RUN_AS_AUR[@]}" 'cd ~ && rm -rf yay && git clone https://aur.archlinux.org/yay.git && cd yay && makepkg -si --noconfirm'
else
echo "[aur-builder-setup] yay already installed."
fi
echo "[aur-builder-setup] Done."

View File

@@ -0,0 +1,30 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
echo "[arch/dependencies] Installing Arch build dependencies..."
pacman -Syu --noconfirm
pacman -S --noconfirm --needed \
base-devel \
git \
rsync \
curl \
ca-certificates \
xz
pacman -Scc --noconfirm
# Always run AUR builder setup for Arch
AUR_SETUP="${SCRIPT_DIR}/aur-builder-setup.sh"
if [[ ! -x "${AUR_SETUP}" ]]; then
echo "[arch/dependencies] ERROR: AUR builder setup script not found or not executable: ${AUR_SETUP}"
exit 1
fi
echo "[arch/dependencies] Running AUR builder setup..."
bash "${AUR_SETUP}"
echo "[arch/dependencies] Done."

View File

@@ -0,0 +1,19 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[arch/package] Building Arch package (makepkg --nodeps)..."
if id aur_builder >/dev/null 2>&1; then
echo "[arch/package] Using 'aur_builder' user for makepkg..."
chown -R aur_builder:aur_builder "$(pwd)"
su aur_builder -c "cd '$(pwd)' && rm -f package-manager-*.pkg.tar.* && makepkg --noconfirm --clean --nodeps"
else
echo "[arch/package] WARNING: user 'aur_builder' not found, running makepkg as current user..."
rm -f package-manager-*.pkg.tar.*
makepkg --noconfirm --clean --nodeps
fi
echo "[arch/package] Installing generated Arch package..."
pacman -U --noconfirm package-manager-*.pkg.tar.*
echo "[arch/package] Done."

View File

@@ -0,0 +1,20 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[centos/dependencies] Installing CentOS build dependencies..."
dnf -y update
dnf -y install \
git \
rsync \
rpm-build \
make \
gcc \
bash \
curl-minimal \
ca-certificates \
xz
dnf clean all
echo "[centos/dependencies] Done."

View File

@@ -0,0 +1,46 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[centos/package] Setting up rpmbuild directories..."
mkdir -p /root/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
echo "[centos/package] Extracting version from package-manager.spec..."
version="$(grep -E '^Version:' package-manager.spec | awk '{print $2}')"
if [[ -z "${version}" ]]; then
echo "ERROR: Version missing!"
exit 1
fi
srcdir="package-manager-${version}"
echo "[centos/package] Preparing source tree: ${srcdir}"
rm -rf "/tmp/${srcdir}"
mkdir -p "/tmp/${srcdir}"
cp -a . "/tmp/${srcdir}/"
echo "[centos/package] Creating source tarball..."
tar czf "/root/rpmbuild/SOURCES/${srcdir}.tar.gz" -C /tmp "${srcdir}"
echo "[centos/package] Copying SPEC..."
cp package-manager.spec /root/rpmbuild/SPECS/
echo "[centos/package] Running rpmbuild..."
cd /root/rpmbuild/SPECS
rpmbuild -bb package-manager.spec
echo "[centos/package] Installing generated RPM (local, offline, forced reinstall)..."
rpm_path="$(find /root/rpmbuild/RPMS -name 'package-manager-*.rpm' | head -n1)"
if [[ -z "${rpm_path}" ]]; then
echo "ERROR: RPM not found!"
exit 1
fi
# ------------------------------------------------------------
# Forced reinstall, always overwrite old version
# ------------------------------------------------------------
echo "[centos/package] Forcing reinstall via rpm -Uvh --replacepkgs --force"
rpm -Uvh --replacepkgs --force "${rpm_path}"
# Keep structure: remove temp directory afterwards
rm -rf "/tmp/${srcdir}"
echo "[centos/package] Done."

View File

@@ -0,0 +1,20 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[debian/dependencies] Installing Debian build dependencies..."
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
build-essential \
debhelper \
dpkg-dev \
git \
rsync \
bash \
curl \
ca-certificates \
xz-utils
rm -rf /var/lib/apt/lists/*
echo "[debian/dependencies] Done."

View File

@@ -0,0 +1,13 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[debian/package] Building Debian package..."
dpkg-buildpackage -us -uc -b
echo "[debian/package] Installing generated DEB package..."
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -y ./../package-manager_*.deb
rm -rf /var/lib/apt/lists/*
echo "[debian/package] Done."

View File

@@ -0,0 +1,21 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[fedora/dependencies] Installing Fedora build dependencies..."
dnf -y update
dnf -y install \
git \
rsync \
rpm-build \
make \
gcc \
bash \
curl \
ca-certificates \
python3 \
xz
dnf clean all
echo "[fedora/dependencies] Done."

View File

@@ -0,0 +1,43 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[fedora/package] Setting up rpmbuild directories..."
mkdir -p /root/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
echo "[fedora/package] Extracting version from package-manager.spec..."
version="$(grep -E '^Version:' package-manager.spec | awk '{print $2}')"
if [[ -z "${version}" ]]; then
echo "ERROR: Version missing!"
exit 1
fi
srcdir="package-manager-${version}"
echo "[fedora/package] Preparing source tree: ${srcdir}"
rm -rf "/tmp/${srcdir}"
mkdir -p "/tmp/${srcdir}"
cp -a . "/tmp/${srcdir}/"
echo "[fedora/package] Creating source tarball..."
tar czf "/root/rpmbuild/SOURCES/${srcdir}.tar.gz" -C /tmp "${srcdir}"
echo "[fedora/package] Copying SPEC..."
cp package-manager.spec /root/rpmbuild/SPECS/
echo "[fedora/package] Running rpmbuild..."
cd /root/rpmbuild/SPECS
rpmbuild -bb package-manager.spec
echo "[fedora/package] Installing generated RPM (local, offline, forced reinstall)..."
rpm_path="$(find /root/rpmbuild/RPMS -name 'package-manager-*.rpm' | head -n1)"
if [[ -z "${rpm_path}" ]]; then
echo "ERROR: RPM not found!"
exit 1
fi
# Always force (re)install the freshly built RPM, even if the same
# version is already installed. This is what we want in dev/test containers.
rpm -Uvh --replacepkgs --force "${rpm_path}"
rm -rf "/tmp/${srcdir}"
echo "[fedora/package] Done."

12
scripts/installation/lib.sh Executable file
View File

@@ -0,0 +1,12 @@
#!/usr/bin/env bash
set -euo pipefail
detect_os_id() {
if [[ -f /etc/os-release ]]; then
# shellcheck disable=SC1091
. /etc/os-release
echo "${ID:-unknown}"
else
echo "unknown"
fi
}

89
scripts/installation/main.sh Executable file
View File

@@ -0,0 +1,89 @@
#!/usr/bin/env bash
set -euo pipefail
# ------------------------------------------------------------
# main.sh
#
# Developer setup entrypoint.
#
# Responsibilities:
# - If inside a Nix shell (IN_NIX_SHELL=1):
# * Skip venv creation and dependency installation
# * Run `python3 main.py install`
# - Otherwise:
# * Create ~/.venvs/pkgmgr virtual environment if missing
# * Install Python dependencies into that venv
# * Append auto-activation to ~/.bashrc and ~/.zshrc
# * Run `main.py install` using the venv Python
# ------------------------------------------------------------
echo "[installation/main] Starting developer setup..."
PROJECT_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
cd "${PROJECT_ROOT}"
VENV_DIR="${HOME}/.venvs/pkgmgr"
RC_LINE='if [ -d "${HOME}/.venvs/pkgmgr" ]; then . "${HOME}/.venvs/pkgmgr/bin/activate"; if [ -n "${PS1:-}" ]; then echo "Global Python virtual environment '\''~/.venvs/pkgmgr'\'' activated."; fi; fi'
# ------------------------------------------------------------
# Nix shell mode: do not touch venv, only run main.py install
# ------------------------------------------------------------
if [[ -n "${IN_NIX_SHELL:-}" ]]; then
echo "[installation/main] Nix shell detected (IN_NIX_SHELL=1)."
echo "[installation/main] Skipping virtualenv creation and dependency installation."
echo "[installation/main] Running main.py install via system python3..."
python3 main.py install
echo "[installation/main] Developer setup finished (Nix mode)."
exit 0
fi
# ------------------------------------------------------------
# Normal host mode: create/update venv and run main.py install
# ------------------------------------------------------------
echo "[installation/main] Ensuring main.py is executable..."
chmod +x main.py || true
echo "[installation/main] Ensuring global virtualenv root: ${HOME}/.venvs"
mkdir -p "${HOME}/.venvs"
if [[ ! -d "${VENV_DIR}" ]]; then
echo "[installation/main] Creating virtual environment at: ${VENV_DIR}"
python3 -m venv "${VENV_DIR}"
else
echo "[installation/main] Virtual environment already exists at: ${VENV_DIR}"
fi
echo "[installation/main] Installing Python tooling into venv..."
"${VENV_DIR}/bin/python" -m ensurepip --upgrade
"${VENV_DIR}/bin/pip" install --upgrade pip setuptools wheel
if [[ -f "requirements.txt" ]]; then
echo "[installation/main] Installing dependencies from requirements.txt..."
"${VENV_DIR}/bin/pip" install -r requirements.txt
elif [[ -f "_requirements.txt" ]]; then
echo "[installation/main] Installing dependencies from _requirements.txt..."
"${VENV_DIR}/bin/pip" install -r _requirements.txt
else
echo "[installation/main] No requirements.txt or _requirements.txt found. Skipping dependency installation."
fi
echo "[installation/main] Ensuring ~/.bashrc and ~/.zshrc exist..."
touch "${HOME}/.bashrc" "${HOME}/.zshrc"
echo "[installation/main] Ensuring venv auto-activation is present in shell rc files..."
for rc in "${HOME}/.bashrc" "${HOME}/.zshrc"; do
if ! grep -qxF "${RC_LINE}" "$rc"; then
echo "${RC_LINE}" >> "$rc"
echo "[installation/main] Appended auto-activation to $rc"
else
echo "[installation/main] Auto-activation already present in $rc"
fi
done
echo "[installation/main] Running main.py install via venv Python..."
"${VENV_DIR}/bin/python" main.py install
echo
echo "[installation/main] Developer setup complete."
echo "Restart your shell (or run 'exec bash' or 'exec zsh') to activate the environment."

View File

@@ -0,0 +1,29 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=/dev/null
source "${SCRIPT_DIR}/lib.sh"
OS_ID="$(detect_os_id)"
echo "[run-dependencies] Detected OS: ${OS_ID}"
case "${OS_ID}" in
arch|debian|ubuntu|fedora|centos)
DEP_SCRIPT="${SCRIPT_DIR}/${OS_ID}/dependencies.sh"
;;
*)
echo "[run-dependencies] Unsupported OS: ${OS_ID}"
exit 1
;;
esac
if [[ ! -f "${DEP_SCRIPT}" ]]; then
echo "[run-dependencies] Dependency script not found: ${DEP_SCRIPT}"
exit 1
fi
echo "[run-dependencies] Executing: ${DEP_SCRIPT}"
exec bash "${DEP_SCRIPT}"

View File

@@ -0,0 +1,29 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=/dev/null
source "${SCRIPT_DIR}/lib.sh"
OS_ID="$(detect_os_id)"
echo "[run-package] Detected OS: ${OS_ID}"
case "${OS_ID}" in
arch|debian|ubuntu|fedora|centos)
PKG_SCRIPT="${SCRIPT_DIR}/${OS_ID}/package.sh"
;;
*)
echo "[run-package] Unsupported OS: ${OS_ID}"
exit 1
;;
esac
if [[ ! -f "${PKG_SCRIPT}" ]]; then
echo "[run-package] Package script not found: ${PKG_SCRIPT}"
exit 1
fi
echo "[run-package] Executing: ${PKG_SCRIPT}"
exec bash "${PKG_SCRIPT}"

View File

@@ -0,0 +1,22 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[ubuntu/dependencies] Installing Ubuntu build dependencies..."
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
build-essential \
debhelper \
dpkg-dev \
git \
tzdata \
lsb-release \
rsync \
bash \
curl \
ca-certificates \
xz-utils
rm -rf /var/lib/apt/lists/*
echo "[ubuntu/dependencies] Done."

View File

@@ -0,0 +1,13 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[ubuntu/package] Building Ubuntu (Debian-style) package..."
dpkg-buildpackage -us -uc -b
echo "[ubuntu/package] Installing generated DEB package..."
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -y ./../package-manager_*.deb
rm -rf /var/lib/apt/lists/*
echo "[ubuntu/package] Done."

32
scripts/pkgmgr-wrapper.sh Normal file → Executable file
View File

@@ -1,10 +1,40 @@
#!/usr/bin/env bash
set -euo pipefail
# Ensure NIX_CONFIG has our defaults if not already set
if [[ -z "${NIX_CONFIG:-}" ]]; then
export NIX_CONFIG="experimental-features = nix-command flakes"
fi
FLAKE_DIR="/usr/lib/package-manager"
exec nix run "${FLAKE_DIR}#pkgmgr" -- "$@"
# ------------------------------------------------------------
# Try to ensure that "nix" is on PATH
# ------------------------------------------------------------
if ! command -v nix >/dev/null 2>&1; then
# Common locations for Nix installations
CANDIDATES=(
"/nix/var/nix/profiles/default/bin/nix"
"${HOME:-/root}/.nix-profile/bin/nix"
)
for candidate in "${CANDIDATES[@]}"; do
if [[ -x "$candidate" ]]; then
# Prepend the directory of the candidate to PATH
PATH="$(dirname "$candidate"):${PATH}"
export PATH
break
fi
done
fi
# ------------------------------------------------------------
# Primary (and only) path: use Nix flake if available
# ------------------------------------------------------------
if command -v nix >/dev/null 2>&1; then
exec nix run "${FLAKE_DIR}#pkgmgr" -- "$@"
fi
echo "[pkgmgr-wrapper] ERROR: 'nix' binary not found on PATH."
echo "[pkgmgr-wrapper] Nix is required to run pkgmgr (no Python fallback)."
exit 1
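With nix on PATH the wrapper is a thin shim; it effectively performs the following call (shown here with --help as an illustrative argument):

    NIX_CONFIG="experimental-features = nix-command flakes" \
      nix run /usr/lib/package-manager#pkgmgr -- --help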

12
scripts/test/test-common.sh Executable file
View File

@@ -0,0 +1,12 @@
#!/usr/bin/env bash
set -euo pipefail
detect_container_distro() {
if [[ -f /etc/os-release ]]; then
# shellcheck disable=SC1091
. /etc/os-release
echo "${ID:-unknown}"
else
echo "unknown"
fi
}

40
scripts/test/test-container.sh Executable file
View File

@@ -0,0 +1,40 @@
#!/usr/bin/env bash
set -euo pipefail
echo "============================================================"
echo ">>> Running sanity test: verifying test containers start"
echo "============================================================"
for distro in $DISTROS; do
IMAGE="package-manager-test-$distro"
echo
echo "------------------------------------------------------------"
echo ">>> Testing container: $IMAGE"
echo "------------------------------------------------------------"
echo "[test-container] Running: docker run --rm --entrypoint pkgmgr $IMAGE --help"
echo
# Run the command and capture the output
if OUTPUT=$(docker run --rm \
-e PKGMGR_DEV=1 \
-v "$(pwd):/src" \
-v "pkgmgr_nix_cache:/root/.cache/nix" \
"$IMAGE" 2>&1); then
echo "$OUTPUT"
echo
echo "[test-container] SUCCESS: $IMAGE responded to 'pkgmgr --help'"
else
echo "$OUTPUT"
echo
echo "[test-container] ERROR: $IMAGE failed to run 'pkgmgr --help'"
exit 1
fi
done
echo
echo "============================================================"
echo ">>> All containers passed the sanity check"
echo "============================================================"

56
scripts/test/test-e2e.sh Executable file
View File

@@ -0,0 +1,56 @@
#!/usr/bin/env bash
set -euo pipefail
echo ">>> Running E2E tests in all distros: $DISTROS"
for distro in $DISTROS; do
echo "============================================================"
echo ">>> Running E2E tests: $distro"
echo "============================================================"
MOUNT_NIX=""
if [[ "$distro" == "arch" ]]; then
MOUNT_NIX="-v pkgmgr_nix_store:/nix"
fi
docker run --rm \
-v "$(pwd):/src" \
$MOUNT_NIX \
-v "pkgmgr_nix_cache:/root/.cache/nix" \
-e PKGMGR_DEV=1 \
--workdir /src \
--entrypoint bash \
"package-manager-test-$distro" \
-c '
set -e;
if [ -f /etc/os-release ]; then
. /etc/os-release;
fi;
echo "Running tests inside distro: $ID";
# Try to load nix environment
if [ -f "/nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh" ]; then
. "/nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh";
fi
if [ -f "$HOME/.nix-profile/etc/profile.d/nix.sh" ]; then
. "$HOME/.nix-profile/etc/profile.d/nix.sh";
fi
PATH="/nix/var/nix/profiles/default/bin:$HOME/.nix-profile/bin:$PATH";
command -v nix >/dev/null || {
echo "ERROR: nix not found.";
exit 1;
}
git config --global --add safe.directory /src || true;
nix develop .#default --no-write-lock-file -c \
python3 -m unittest discover \
-s /src/tests/e2e \
-p "test_*.py";
'
done

View File

@@ -0,0 +1,23 @@
#!/usr/bin/env bash
set -euo pipefail
echo "============================================================"
echo ">>> Running INTEGRATION tests in Arch container"
echo "============================================================"
docker run --rm \
-v "$(pwd):/src" \
-v "pkgmgr_nix_cache:/root/.cache/nix" \
--workdir /src \
-e PKGMGR_DEV=1 \
--entrypoint bash \
"package-manager-test-arch" \
-c '
set -e;
git config --global --add safe.directory /src || true;
nix develop .#default --no-write-lock-file -c \
python -m unittest discover \
-s tests/integration \
-t /src \
-p "test_*.py";
'

23
scripts/test/test-unit.sh Executable file
View File

@@ -0,0 +1,23 @@
#!/usr/bin/env bash
set -euo pipefail
echo "============================================================"
echo ">>> Running UNIT tests in Arch container"
echo "============================================================"
docker run --rm \
-v "$(pwd):/src" \
-v "pkgmgr_nix_cache:/root/.cache/nix" \
--workdir /src \
-e PKGMGR_DEV=1 \
--entrypoint bash \
"package-manager-test-arch" \
-c '
set -e;
git config --global --add safe.directory /src || true;
nix develop .#default --no-write-lock-file -c \
python -m unittest discover \
-s tests/unit \
-t /src \
-p "test_*.py";
'

34
scripts/uninstall.sh Executable file
View File

@@ -0,0 +1,34 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[uninstall] Starting pkgmgr uninstall..."
VENV_DIR="${HOME}/.venvs/pkgmgr"
# ------------------------------------------------------------
# Remove virtual environment
# ------------------------------------------------------------
echo "[uninstall] Removing global user virtual environment if it exists..."
if [[ -d "$VENV_DIR" ]]; then
rm -rf "$VENV_DIR"
echo "[uninstall] Removed: $VENV_DIR"
else
echo "[uninstall] No venv found at: $VENV_DIR"
fi
# ------------------------------------------------------------
# Remove auto-activation lines from shell RC files
# ------------------------------------------------------------
RC_PATTERN='\.venvs\/pkgmgr\/bin\/activate"; if \[ -n "\${PS1:-}" \]; then echo "Global Python virtual environment '\''~\/\.venvs\/pkgmgr'\'' activated."; fi; fi'
echo "[uninstall] Cleaning up ~/.bashrc and ~/.zshrc entries..."
for rc in "$HOME/.bashrc" "$HOME/.zshrc"; do
if [[ -f "$rc" ]]; then
sed -i "/$RC_PATTERN/d" "$rc"
echo "[uninstall] Cleaned $rc"
else
echo "[uninstall] File not found: $rc (skipped)"
fi
done
echo "[uninstall] Done. Restart your shell (or run 'exec bash' or 'exec zsh') to apply changes."

View File

@@ -11,8 +11,8 @@ class TestIntegrationBranchCommands(unittest.TestCase):
Integration tests for the `pkgmgr branch` CLI wiring.
These tests execute the real entry point (main.py) and mock
the high-level `open_branch` helper to ensure that argument
parsing and dispatch behave as expected.
the high-level helpers to ensure that argument parsing and
dispatch behave as expected.
"""
def _run_pkgmgr(self, extra_args: list[str]) -> None:
@@ -64,6 +64,46 @@ class TestIntegrationBranchCommands(unittest.TestCase):
self.assertEqual(kwargs.get("base_branch"), "main")
self.assertEqual(kwargs.get("cwd"), ".")
# ------------------------------------------------------------------
# close subcommand
# ------------------------------------------------------------------
@patch("pkgmgr.cli_core.commands.branch.close_branch")
def test_branch_close_with_name_and_base(self, mock_close_branch) -> None:
"""
`pkgmgr branch close feature/test --base develop` must forward
the name and base branch to close_branch() with cwd=".".
"""
self._run_pkgmgr(
["branch", "close", "feature/test", "--base", "develop"]
)
mock_close_branch.assert_called_once()
_, kwargs = mock_close_branch.call_args
self.assertEqual(kwargs.get("name"), "feature/test")
self.assertEqual(kwargs.get("base_branch"), "develop")
self.assertEqual(kwargs.get("cwd"), ".")
@patch("pkgmgr.cli_core.commands.branch.close_branch")
def test_branch_close_without_name_uses_default_base(
self,
mock_close_branch,
) -> None:
"""
`pkgmgr branch close` without a name must still call close_branch(),
passing name=None and the default base branch 'main'.
The branch helper will then resolve the actual base (main/master)
internally.
"""
self._run_pkgmgr(["branch", "close"])
mock_close_branch.assert_called_once()
_, kwargs = mock_close_branch.call_args
self.assertIsNone(kwargs.get("name"))
self.assertEqual(kwargs.get("base_branch"), "main")
self.assertEqual(kwargs.get("cwd"), ".")
if __name__ == "__main__":
unittest.main()

View File

@@ -1,99 +1,137 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
End-to-end style integration tests for the `pkgmgr release` CLI command.
These tests exercise the real top-level entry point (main.py) and mock
the high-level helper used by the CLI wiring
(pkgmgr.cli_core.commands.release.run_release) to ensure that argument
parsing and dispatch behave as expected, in particular the new `close`
flag.
The tests simulate real CLI calls like:
pkgmgr release minor --preview --close
by manipulating sys.argv and executing main.py as __main__ via runpy.
"""
from __future__ import annotations
import os
import runpy
import sys
import unittest
PROJECT_ROOT = os.path.abspath(
os.path.join(os.path.dirname(__file__), "..", "..")
)
from unittest.mock import patch
class TestIntegrationReleaseCommand(unittest.TestCase):
def _run_pkgmgr(
self,
argv: list[str],
expect_success: bool,
) -> None:
"""
Run the main entry point with the given argv and assert on success/failure.
"""Integration tests for `pkgmgr release` wiring."""
argv must include the program name as argv[0], e.g. "":
["", "release", "patch", "pkgmgr", "--preview"]
def _run_pkgmgr(self, extra_args: list[str]) -> None:
"""
Helper to invoke the `pkgmgr` console script via the real
entry point (main.py).
This simulates a real CLI call like:
pkgmgr <extra_args...>
by setting sys.argv accordingly and executing main.py as
__main__ using runpy.run_module.
"""
cmd_repr = " ".join(argv[1:])
original_argv = list(sys.argv)
try:
sys.argv = argv
try:
# Execute main.py as if called via `python main.py ...`
runpy.run_module("main", run_name="__main__")
except SystemExit as exc:
code = exc.code if isinstance(exc.code, int) else 1
if expect_success and code != 0:
print()
print(f"[TEST] Command : {cmd_repr}")
print(f"[TEST] Exit code : {code}")
raise AssertionError(
f"{cmd_repr!r} failed with exit code {code}. "
"Scroll up to inspect the output printed before failure."
) from exc
if not expect_success and code == 0:
print()
print(f"[TEST] Command : {cmd_repr}")
print(f"[TEST] Exit code : {code}")
raise AssertionError(
f"{cmd_repr!r} unexpectedly succeeded with exit code 0."
) from exc
else:
# No SystemExit: treat as success when expect_success is True,
# otherwise as a failure (we expected a non-zero exit).
if not expect_success:
raise AssertionError(
f"{cmd_repr!r} returned normally (expected non-zero exit)."
)
# argv[0] is the program name; the rest are CLI arguments.
sys.argv = ["pkgmgr"] + list(extra_args)
runpy.run_module("main", run_name="__main__")
finally:
sys.argv = original_argv
def test_release_for_unknown_repo_fails_cleanly(self) -> None:
"""
Releasing a non-existent repository identifier must fail
with a non-zero exit code, but without crashing the interpreter.
"""
argv = [
"",
"release",
"patch",
"does-not-exist-xyz",
]
self._run_pkgmgr(argv, expect_success=False)
# ------------------------------------------------------------------
# Behaviour without --close
# ------------------------------------------------------------------
def test_release_preview_for_pkgmgr_repository(self) -> None:
@patch("pkgmgr.cli_core.commands.release.run_release")
@patch("pkgmgr.cli_core.dispatch._select_repo_for_current_directory")
def test_release_without_close_flag(
self,
mock_select_repo,
mock_run_release,
) -> None:
"""
Sanity-check the happy path for the CLI:
- Runs `pkgmgr release patch pkgmgr --preview`
- Must exit with code 0
- Uses the real configuration + repository selection
- Exercises the new --preview mode end-to-end.
Calling `pkgmgr release patch --preview` should *not* enable
the `close` flag by default.
"""
argv = [
"",
"release",
"patch",
"pkgmgr",
"--preview",
# Ensure that the dispatch layer always selects a repository,
# independent of any real config in the test environment.
mock_select_repo.return_value = [
{
"directory": ".",
"provider": "local",
"account": "test",
"repository": "dummy",
}
]
original_cwd = os.getcwd()
try:
os.chdir(PROJECT_ROOT)
self._run_pkgmgr(argv, expect_success=True)
finally:
os.chdir(original_cwd)
self._run_pkgmgr(["release", "patch", "--preview"])
mock_run_release.assert_called_once()
_args, kwargs = mock_run_release.call_args
# CLI wiring
self.assertEqual(kwargs.get("release_type"), "patch")
self.assertTrue(
kwargs.get("preview"),
"preview should be True when --preview is used",
)
# Default: no --close → close=False
self.assertFalse(
kwargs.get("close"),
"close must be False when --close is not given",
)
# ------------------------------------------------------------------
# Behaviour with --close
# ------------------------------------------------------------------
@patch("pkgmgr.cli_core.commands.release.run_release")
@patch("pkgmgr.cli_core.dispatch._select_repo_for_current_directory")
def test_release_with_close_flag(
self,
mock_select_repo,
mock_run_release,
) -> None:
"""
Calling `pkgmgr release minor --preview --close` should pass
close=True into the helper used by the CLI wiring.
"""
# Again: make sure there is always a selected repository.
mock_select_repo.return_value = [
{
"directory": ".",
"provider": "local",
"account": "test",
"repository": "dummy",
}
]
self._run_pkgmgr(["release", "minor", "--preview", "--close"])
mock_run_release.assert_called_once()
_args, kwargs = mock_run_release.call_args
# CLI wiring
self.assertEqual(kwargs.get("release_type"), "minor")
self.assertTrue(
kwargs.get("preview"),
"preview should be True when --preview is used",
)
# With --close → close=True
self.assertTrue(
kwargs.get("close"),
"close must be True when --close is given",
)
if __name__ == "__main__":

View File

View File

@@ -0,0 +1,206 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Unit tests for pkgmgr.cli_core.commands.release.
These tests focus on the wiring layer:
- Argument handling for the release command as defined by the
top-level parser (cli_core.parser.create_parser).
- Correct invocation of pkgmgr.release.release(...) for the
selected repositories.
- Behaviour of --preview, --list, --close, and -f/--force.
"""
from __future__ import annotations
from types import SimpleNamespace
from typing import List
from unittest.mock import patch, call
import argparse
import unittest
class TestReleaseCommand(unittest.TestCase):
"""
Tests for the `pkgmgr release` CLI wiring.
"""
def _make_ctx(self, all_repos: List[dict]) -> SimpleNamespace:
"""
Create a minimal CLIContext-like object for tests.
Only the attributes that handle_release() uses are provided.
"""
return SimpleNamespace(
config_merged={},
repositories_base_dir="/base/dir",
all_repositories=all_repos,
binaries_dir="/bin",
user_config_path="/tmp/config.yaml",
)
def _parse_release_args(self, argv: List[str]) -> argparse.Namespace:
"""
Build a real top-level parser and parse the given argv list
to obtain the Namespace for the `release` command.
"""
from pkgmgr.cli_core.parser import create_parser
parser = create_parser("test parser")
args = parser.parse_args(argv)
self.assertEqual(args.command, "release")
return args
@patch("pkgmgr.cli_core.commands.release.os.path.isdir", return_value=True)
@patch("pkgmgr.cli_core.commands.release.run_release")
@patch("pkgmgr.cli_core.commands.release.get_repo_dir")
@patch("pkgmgr.cli_core.commands.release.get_repo_identifier")
@patch("pkgmgr.cli_core.commands.release.os.chdir")
@patch("pkgmgr.cli_core.commands.release.os.getcwd", return_value="/cwd")
def test_release_with_close_and_message(
self,
mock_getcwd,
mock_chdir,
mock_get_repo_identifier,
mock_get_repo_dir,
mock_run_release,
mock_isdir,
) -> None:
"""
The release handler should call pkgmgr.release.release() with:
- release_type (e.g. minor)
- provided message
- preview flag
- force flag
- close flag
It must change into the repository directory and then back.
"""
from pkgmgr.cli_core.commands.release import handle_release
repo = {"name": "dummy-repo"}
selected = [repo]
ctx = self._make_ctx(selected)
mock_get_repo_identifier.return_value = "dummy-id"
mock_get_repo_dir.return_value = "/repos/dummy"
argv = [
"release",
"minor",
"dummy-id",
"-m",
"Close branch after minor release",
"--close",
"-f",
]
args = self._parse_release_args(argv)
handle_release(args, ctx, selected)
# We should have changed into the repo dir and then back.
mock_chdir.assert_has_calls(
[call("/repos/dummy"), call("/cwd")]
)
# And run_release should be invoked once with the expected parameters.
mock_run_release.assert_called_once_with(
pyproject_path="pyproject.toml",
changelog_path="CHANGELOG.md",
release_type="minor",
message="Close branch after minor release",
preview=False,
force=True,
close=True,
)
@patch("pkgmgr.cli_core.commands.release.os.path.isdir", return_value=True)
@patch("pkgmgr.cli_core.commands.release.run_release")
@patch("pkgmgr.cli_core.commands.release.get_repo_dir")
@patch("pkgmgr.cli_core.commands.release.get_repo_identifier")
@patch("pkgmgr.cli_core.commands.release.os.chdir")
@patch("pkgmgr.cli_core.commands.release.os.getcwd", return_value="/cwd")
def test_release_preview_mode(
self,
mock_getcwd,
mock_chdir,
mock_get_repo_identifier,
mock_get_repo_dir,
mock_run_release,
mock_isdir,
) -> None:
"""
In preview mode, the handler should pass preview=True to the
release helper and force=False by default.
"""
from pkgmgr.cli_core.commands.release import handle_release
repo = {"name": "dummy-repo"}
selected = [repo]
ctx = self._make_ctx(selected)
mock_get_repo_identifier.return_value = "dummy-id"
mock_get_repo_dir.return_value = "/repos/dummy"
argv = [
"release",
"patch",
"dummy-id",
"--preview",
]
args = self._parse_release_args(argv)
handle_release(args, ctx, selected)
mock_run_release.assert_called_once_with(
pyproject_path="pyproject.toml",
changelog_path="CHANGELOG.md",
release_type="patch",
message=None,
preview=True,
force=False,
close=False,
)
@patch("pkgmgr.cli_core.commands.release.run_release")
@patch("pkgmgr.cli_core.commands.release.get_repo_dir")
@patch("pkgmgr.cli_core.commands.release.get_repo_identifier")
def test_release_list_mode_does_not_invoke_helper(
self,
mock_get_repo_identifier,
mock_get_repo_dir,
mock_run_release,
) -> None:
"""
When --list is provided, the handler should print the list of affected
repositories and must NOT invoke run_release().
"""
from pkgmgr.cli_core.commands.release import handle_release
repo1 = {"name": "repo-1"}
repo2 = {"name": "repo-2"}
selected = [repo1, repo2]
ctx = self._make_ctx(selected)
mock_get_repo_identifier.side_effect = ["id-1", "id-2"]
argv = [
"release",
"major",
"--list",
]
args = self._parse_release_args(argv)
handle_release(args, ctx, selected)
mock_run_release.assert_not_called()
self.assertEqual(
mock_get_repo_identifier.call_args_list,
[call(repo1, selected), call(repo2, selected)],
)
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,112 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Unit tests for the `pkgmgr branch` CLI wiring.
These tests verify that:
- The argument parser creates the correct structure for
`branch open` and `branch close`.
- `handle_branch` calls the corresponding helper functions
with the expected arguments (including base branch and cwd).
"""
from __future__ import annotations
import unittest
from unittest.mock import patch
from pkgmgr.cli_core.parser import create_parser
from pkgmgr.cli_core.commands.branch import handle_branch
class TestBranchCLI(unittest.TestCase):
"""
Tests for the branch subcommands implemented in cli_core.
"""
def _create_parser(self):
"""
Create the top-level parser with a minimal description.
"""
return create_parser("pkgmgr test parser")
@patch("pkgmgr.cli_core.commands.branch.open_branch")
def test_branch_open_with_name_and_base(self, mock_open_branch):
"""
Ensure that `pkgmgr branch open <name> --base <branch>` calls
open_branch() with the correct parameters.
"""
parser = self._create_parser()
args = parser.parse_args(
["branch", "open", "feature/test-branch", "--base", "develop"]
)
# Sanity check: parser wiring
self.assertEqual(args.command, "branch")
self.assertEqual(args.subcommand, "open")
self.assertEqual(args.name, "feature/test-branch")
self.assertEqual(args.base, "develop")
# ctx is currently unused by handle_branch, so we can pass None
handle_branch(args, ctx=None)
mock_open_branch.assert_called_once()
_args, kwargs = mock_open_branch.call_args
self.assertEqual(kwargs.get("name"), "feature/test-branch")
self.assertEqual(kwargs.get("base_branch"), "develop")
self.assertEqual(kwargs.get("cwd"), ".")
@patch("pkgmgr.cli_core.commands.branch.close_branch")
def test_branch_close_with_name_and_base(self, mock_close_branch):
"""
Ensure that `pkgmgr branch close <name> --base <branch>` calls
close_branch() with the correct parameters.
"""
parser = self._create_parser()
args = parser.parse_args(
["branch", "close", "feature/old-branch", "--base", "main"]
)
# Sanity check: parser wiring
self.assertEqual(args.command, "branch")
self.assertEqual(args.subcommand, "close")
self.assertEqual(args.name, "feature/old-branch")
self.assertEqual(args.base, "main")
handle_branch(args, ctx=None)
mock_close_branch.assert_called_once()
_args, kwargs = mock_close_branch.call_args
self.assertEqual(kwargs.get("name"), "feature/old-branch")
self.assertEqual(kwargs.get("base_branch"), "main")
self.assertEqual(kwargs.get("cwd"), ".")
@patch("pkgmgr.cli_core.commands.branch.close_branch")
def test_branch_close_without_name_uses_none(self, mock_close_branch):
"""
Ensure that `pkgmgr branch close` without a name passes name=None
into close_branch(), leaving branch resolution to the helper.
"""
parser = self._create_parser()
args = parser.parse_args(["branch", "close"])
# Parser wiring: no name → None
self.assertEqual(args.command, "branch")
self.assertEqual(args.subcommand, "close")
self.assertIsNone(args.name)
handle_branch(args, ctx=None)
mock_close_branch.assert_called_once()
_args, kwargs = mock_close_branch.call_args
self.assertIsNone(kwargs.get("name"))
self.assertEqual(kwargs.get("base_branch"), "main")
self.assertEqual(kwargs.get("cwd"), ".")
if __name__ == "__main__":
unittest.main()

View File

@@ -42,7 +42,7 @@ def _fake_config() -> Dict[str, Any]:
"workspaces": "/tmp/pkgmgr-workspaces",
},
# The actual list of repositories is not used directly by the tests,
# because we mock get_selected_repos(). It must exist, though.
# because we mock the selection logic. It must exist, though.
"repositories": [],
}
@@ -54,8 +54,9 @@ class TestCliVersion(unittest.TestCase):
Each test:
- Runs in a temporary working directory.
- Uses a fake configuration via load_config().
- Uses a mocked get_selected_repos() that returns a single repo
pointing to the temporary directory.
- Uses the same selection logic as the new CLI:
* dispatch_command() calls _select_repo_for_current_directory()
when there is no explicit selection.
"""
def setUp(self) -> None:
@@ -64,31 +65,30 @@ class TestCliVersion(unittest.TestCase):
self._old_cwd = os.getcwd()
os.chdir(self._tmp_dir.name)
# Define a fake repo pointing to our temp dir
self._fake_repo = {
"provider": "github.com",
"account": "test",
"repository": "pkgmgr-test",
"directory": self._tmp_dir.name,
}
# Patch load_config so cli.main() does not read real config files
self._patch_load_config = mock.patch(
"pkgmgr.cli.load_config", return_value=_fake_config()
)
self.mock_load_config = self._patch_load_config.start()
# Patch get_selected_repos so that 'version' operates on our temp dir.
# In the new modular CLI this function is used inside
# pkgmgr.cli_core.dispatch, so we patch it there.
def _fake_selected_repos(args, all_repositories):
return [
{
"provider": "github.com",
"account": "test",
"repository": "pkgmgr-test",
"directory": self._tmp_dir.name,
}
]
self._patch_get_selected_repos = mock.patch(
"pkgmgr.cli_core.dispatch.get_selected_repos",
side_effect=_fake_selected_repos,
# Patch the "current directory" selection used by dispatch_command().
# This matches the new behaviour: without explicit identifiers,
# version uses _select_repo_for_current_directory(ctx).
self._patch_select_repo_for_current_directory = mock.patch(
"pkgmgr.cli_core.dispatch._select_repo_for_current_directory",
return_value=[self._fake_repo],
)
self.mock_select_repo_for_current_directory = (
self._patch_select_repo_for_current_directory.start()
)
self.mock_get_selected_repos = self._patch_get_selected_repos.start()
# Keep a reference to the original sys.argv, so we can restore it
self._old_argv = list(sys.argv)
@@ -98,7 +98,7 @@ class TestCliVersion(unittest.TestCase):
sys.argv = self._old_argv
# Stop all patches
self._patch_get_selected_repos.stop()
self._patch_select_repo_for_current_directory.stop()
self._patch_load_config.stop()
# Restore working directory
@@ -224,7 +224,7 @@ class TestCliVersion(unittest.TestCase):
# Arrange: pyproject.toml exists
self._write_pyproject("0.0.1")
# Arrange: no tags returned (again: patch handle_version's get_tags)
# Arrange: no tags returned
with mock.patch(
"pkgmgr.cli_core.commands.version.get_tags",
return_value=[],

View File

@@ -66,6 +66,55 @@ class TestCliBranch(unittest.TestCase):
self.assertEqual(call_kwargs.get("base_branch"), "main")
self.assertEqual(call_kwargs.get("cwd"), ".")
# ------------------------------------------------------------------
# close subcommand
# ------------------------------------------------------------------
@patch("pkgmgr.cli_core.commands.branch.close_branch")
def test_handle_branch_close_forwards_args_to_close_branch(self, mock_close_branch) -> None:
"""
handle_branch('close') should call close_branch with name, base and cwd='.'.
"""
args = SimpleNamespace(
command="branch",
subcommand="close",
name="feature/cli-close",
base="develop",
)
ctx = self._dummy_ctx()
handle_branch(args, ctx)
mock_close_branch.assert_called_once()
_, call_kwargs = mock_close_branch.call_args
self.assertEqual(call_kwargs.get("name"), "feature/cli-close")
self.assertEqual(call_kwargs.get("base_branch"), "develop")
self.assertEqual(call_kwargs.get("cwd"), ".")
@patch("pkgmgr.cli_core.commands.branch.close_branch")
def test_handle_branch_close_uses_default_base_when_not_set(self, mock_close_branch) -> None:
"""
If --base is not passed for 'close', argparse gives base='main'
(default), and handle_branch should propagate that to close_branch.
"""
args = SimpleNamespace(
command="branch",
subcommand="close",
name=None,
base="main",
)
ctx = self._dummy_ctx()
handle_branch(args, ctx)
mock_close_branch.assert_called_once()
_, call_kwargs = mock_close_branch.call_args
self.assertIsNone(call_kwargs.get("name"))
self.assertEqual(call_kwargs.get("base_branch"), "main")
self.assertEqual(call_kwargs.get("cwd"), ".")
def test_handle_branch_unknown_subcommand_exits_with_code_2(self) -> None:
"""
Unknown branch subcommand should result in SystemExit(2).

View File

@@ -365,6 +365,7 @@ class TestUpdateDebianChangelog(unittest.TestCase):
class TestReleaseOrchestration(unittest.TestCase):
@patch("pkgmgr.release.sys.stdin.isatty", return_value=False)
@patch("pkgmgr.release._run_git_command")
@patch("pkgmgr.release.update_debian_changelog")
@patch("pkgmgr.release.update_spec_version")
@@ -387,6 +388,7 @@ class TestReleaseOrchestration(unittest.TestCase):
mock_update_spec,
mock_update_debian_changelog,
mock_run_git_command,
mock_isatty,
) -> None:
mock_determine_current_version.return_value = SemVer(1, 2, 3)
mock_bump_semver.return_value = SemVer(1, 2, 4)
@@ -449,6 +451,7 @@ class TestReleaseOrchestration(unittest.TestCase):
self.assertIn("git push origin develop", git_calls)
self.assertIn("git push origin --tags", git_calls)
@patch("pkgmgr.release.sys.stdin.isatty", return_value=False)
@patch("pkgmgr.release._run_git_command")
@patch("pkgmgr.release.update_debian_changelog")
@patch("pkgmgr.release.update_spec_version")
@@ -471,6 +474,7 @@ class TestReleaseOrchestration(unittest.TestCase):
mock_update_spec,
mock_update_debian_changelog,
mock_run_git_command,
mock_isatty,
) -> None:
mock_determine_current_version.return_value = SemVer(1, 2, 3)
mock_bump_semver.return_value = SemVer(1, 2, 4)