Compare commits


8 Commits

Author SHA1 Message Date
Kevin Veen-Birkenbach
22efe0b32e Release version 0.7.13 2025-12-10 10:27:27 +01:00
Kevin Veen-Birkenbach
d23a0a94d5 Fix tools path resolution and add tests
- Use _resolve_repository_path() for explore, terminal and code commands
  so tools no longer rely on a 'directory' key in the repository dict.
- Fall back to repositories_base_dir/repositories_dir via get_repo_dir()
  when no explicit path-like key is present.
- Make VS Code workspace creation more robust (safe default for
  directories.workspaces and UTF-8 when writing JSON).
- Add unit tests for handle_tools_command (explore, terminal, code) under
  tests/unit/pkgmgr/cli/commands/test_tools.py.
- Add E2E/integration-style tests for the tools subcommands' --help
  output under tests/e2e/test_tools_help.py, treating SystemExit(0) as
  success.

This change fixes the KeyError: 'directory' when running 'pkgmgr code'
and verifies the behavior via unit and integration tests.

https://chatgpt.com/share/69393ca1-b554-800f-9967-abf8c4e3fea3
2025-12-10 10:25:29 +01:00
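A condensed sketch of the resolution order described in this commit; the names follow the tools.py diff further below, where get_repo_dir is imported from pkgmgr.core.repository.dir:

    from pkgmgr.core.repository.dir import get_repo_dir

    def _resolve_repository_path(repository, ctx):
        # 1) Prefer explicit path-like keys stored on the repository dict.
        for key in ("directory", "path", "workspace", "workspace_dir"):
            if repository.get(key):
                return repository[key]
        # 2) Otherwise derive the path from the configured base directory.
        base_dir = (
            getattr(ctx, "repositories_base_dir", None)
            or getattr(ctx, "repositories_dir", None)
        )
        if not base_dir:
            raise RuntimeError("Cannot resolve repositories base directory from context.")
        return get_repo_dir(base_dir, repository)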
Kevin Veen-Birkenbach
e42b79c9d8 Add E2E tests for 'clone --all' and 'update --all' using HTTPS mode
This commit introduces two new end-to-end integration tests:

  • tests/e2e/test_clone_all.py
      Runs: pkgmgr clone --all --clone-mode https --no-verification
      Verifies that full HTTPS cloning of all configured repositories
      works inside the test container environment.

  • tests/e2e/test_update_all.py
      Runs: pkgmgr update --all --clone-mode https --no-verification
      Ensures that updating all repositories with HTTPS mode completes
      successfully without raising exceptions.

Both tests:
  - Provide extended diagnostics on SystemExit
  - Reuse nix-profile cleanup helpers for consistent test environments
  - Validate that `pkgmgr --help` works after execution

These tests complement the existing shallow-install integration test
and improve overall reliability of HTTPS clone/update workflows.
2025-12-09 23:47:43 +01:00
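Both tests drive the CLI the same way: swap sys.argv, execute main.py via runpy, and turn an unexpected SystemExit into a readable failure. A trimmed-down sketch of that pattern (it assumes main.py is importable as the "main" module, as in the test container; the full tests appear in the diff below):

    import runpy
    import sys

    def run_pkgmgr(argv):
        original_argv = sys.argv
        try:
            sys.argv = ["pkgmgr"] + argv
            try:
                # Execute main.py as if it had been started from the shell.
                runpy.run_module("main", run_name="__main__")
            except SystemExit as exc:
                # Convert CLI exits into a diagnostic test failure.
                raise AssertionError(f"pkgmgr exited with code {exc.code}") from exc
        finally:
            sys.argv = original_argv

    run_pkgmgr(["clone", "--all", "--clone-mode", "https", "--no-verification"])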
Kevin Veen-Birkenbach
3b2c657bfa Release version 0.7.12 2025-12-09 23:36:38 +01:00
Kevin Veen-Birkenbach
e335ab05a1 fix(core/ink): prevent self-referential symlinks + add unit tests
This commit adds a safety guard to create_ink() to prevent creation of
self-referential symlinks when the resolved command already lives at the
intended link target (e.g. ~/.local/bin/package-manager). Such a situation
previously resulted in broken shells with the error:

    "zsh: too many levels of symbolic links"

Key changes:
  - create_ink():
      • Introduce early-abort guard when command == link_path
      • Improve function signature and formatting
      • Enhance alias creation messaging

  - Added comprehensive unit tests under:
        tests/unit/pkgmgr/core/command/test_ink.py
    Tests cover:
      • Self-referential command path → skip symlink creation
      • Standard symlink + alias creation behaviour

This prevents pkgmgr from overwriting user-managed binaries inside ~/.local/bin
and ensures predictable, safe behaviour across all installer layers.

https://chatgpt.com/share/6938a43b-0eb8-800f-9545-6cb555ab406d
2025-12-09 23:35:29 +01:00
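The core of the guard is a single path comparison before anything is touched on disk; a minimal illustration (the helper name here is hypothetical, the real check lives inside create_ink() in the diff below):

    import os

    def link_command(command: str, link_path: str) -> None:
        # If the command already lives exactly at the link target, do nothing:
        # replacing it would create a symlink pointing at itself.
        if os.path.abspath(command) == os.path.abspath(link_path):
            print(f"{link_path} already provides the command; skipping.")
            return
        if os.path.lexists(link_path):
            os.remove(link_path)
        os.symlink(command, link_path)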
Kevin Veen-Birkenbach
75f963d6e2 Removed tests/e2e/test_install_all_shallow.py 2025-12-09 23:18:49 +01:00
Kevin Veen-Birkenbach
94b998741f Release version 0.7.11 2025-12-09 23:16:48 +01:00
Kevin Veen-Birkenbach
172c734866 test: fix installer unit tests for OS packages and Nix dev shell
Update Debian, RPM, Nix flake, and Python installer unit tests to match the current
installer behavior and to run correctly inside the Nix development shell.

- DebianControlInstaller:
  - Add clearer docstrings for supports() behavior.
  - Relax final install assertion to accept dpkg -i, sudo dpkg -i, or
    sudo apt-get install -y.
  - Keep checks for apt-get update, apt-get build-dep, and dpkg-buildpackage.

- RpmSpecInstaller:
  - Add docstrings for supports() conditions.
  - Mock _prepare_source_tarball() to avoid touching the filesystem.
  - Assert builddep, rpmbuild -ba, and sudo dnf install -y commands.

- NixFlakeInstaller:
  - Ensure supports() and run() tests simulate a non-Nix-shell environment
    via IN_NIX_SHELL and PKGMGR_DISABLE_NIX_FLAKE_INSTALLER.
  - Verify that the old profile entry is removed and both pkgmgr and default
    flake outputs are installed.
  - Confirm _ensure_old_profile_removed() swallows SystemExit.

- PythonInstaller:
  - Make supports() and run() tests ignore the real IN_NIX_SHELL environment.
  - Assert that pip install . is invoked with cwd set to the repository
    directory.

These changes make the unit tests stable in the Nix dev shell and align them
with the current installer implementations.
2025-12-09 23:15:56 +01:00
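The NixFlake and Python installer tests neutralize the Nix dev-shell markers before exercising the installers; a minimal sketch of that pattern, assuming an installer and ctx object are already set up as in the tests below:

    import os
    from unittest.mock import patch

    # Pretend we are on a plain host: not inside nix develop, installer enabled.
    with patch.dict(
        os.environ,
        {"IN_NIX_SHELL": "", "PKGMGR_DISABLE_NIX_FLAKE_INSTALLER": ""},
        clear=False,
    ):
        ...  # installer.supports(ctx) and installer.run(ctx) are exercised here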
18 changed files with 760 additions and 89 deletions

View File

@@ -1,3 +1,18 @@
## [0.7.13] - 2025-12-10
* Automated release.
## [0.7.12] - 2025-12-09
* Fixed self-referring alias during setup
## [0.7.11] - 2025-12-09
* test: fix installer unit tests for OS packages and Nix dev shell
## [0.7.10] - 2025-12-09
* Fixed test_install_pkgmgr_shallow.py

View File

@@ -1,7 +1,7 @@
# Maintainer: Kevin Veen-Birkenbach <info@veen.world>
pkgname=package-manager
pkgver=0.7.10
pkgver=0.7.13
pkgrel=1
pkgdesc="Local-flake wrapper for Kevin's package-manager (Nix-based)."
arch=('any')

debian/changelog vendored
View File

@@ -1,3 +1,21 @@
package-manager (0.7.13-1) unstable; urgency=medium
* Automated release.
-- Kevin Veen-Birkenbach <kevin@veen.world> Wed, 10 Dec 2025 10:27:24 +0100
package-manager (0.7.12-1) unstable; urgency=medium
* Fixed self-referring alias during setup
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 23:36:35 +0100
package-manager (0.7.11-1) unstable; urgency=medium
* test: fix installer unit tests for OS packages and Nix dev shell
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 09 Dec 2025 23:16:46 +0100
package-manager (0.7.10-1) unstable; urgency=medium
* Fixed test_install_pkgmgr_shallow.py

View File

@@ -31,7 +31,7 @@
rec {
pkgmgr = pyPkgs.buildPythonApplication {
pname = "package-manager";
version = "0.7.10";
version = "0.7.13";
# Use the git repo as source
src = ./.;

View File

@@ -1,5 +1,5 @@
Name: package-manager
Version: 0.7.10
Version: 0.7.13
Release: 1%{?dist}
Summary: Wrapper that runs Kevin's package-manager via Nix flake
@@ -77,6 +77,15 @@ echo ">>> package-manager removed. Nix itself was not removed."
/usr/lib/package-manager/
%changelog
* Wed Dec 10 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.13-1
- Automated release.
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.12-1
- Fixed self-referring alias during setup
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.11-1
- test: fix installer unit tests for OS packages and Nix dev shell
* Tue Dec 09 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 0.7.10-1
- Fixed test_install_pkgmgr_shallow.py

View File

@@ -1,57 +1,84 @@
from __future__ import annotations
import json
import os
from typing import Any, Dict, List
from pkgmgr.cli.context import CLIContext
from pkgmgr.core.command.run import run_command
from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
Repository = Dict[str, Any]
def _resolve_repository_path(repository: Repository, ctx: CLIContext) -> str:
"""
Resolve the filesystem path for a repository.
Priority:
1. Use explicit keys if present (directory / path / workspace / workspace_dir).
2. Fallback to get_repo_dir(...) using the repositories base directory
from the CLI context.
"""
# 1) Explicit path-like keys on the repository object
for key in ("directory", "path", "workspace", "workspace_dir"):
value = repository.get(key)
if value:
return value
# 2) Fallback: compute from base dir + repository metadata
base_dir = (
getattr(ctx, "repositories_base_dir", None)
or getattr(ctx, "repositories_dir", None)
)
if not base_dir:
raise RuntimeError(
"Cannot resolve repositories base directory from context; "
"expected ctx.repositories_base_dir or ctx.repositories_dir."
)
return get_repo_dir(base_dir, repository)
def handle_tools_command(
args,
ctx: CLIContext,
selected: List[Repository],
) -> None:
"""
Handle integration commands:
- explore (file manager)
- terminal (GNOME Terminal)
- code (VS Code workspace)
"""
# --------------------------------------------------------
# explore
# --------------------------------------------------------
# ------------------------------------------------------------------
# nautilus "explore" command
# ------------------------------------------------------------------
if args.command == "explore":
for repository in selected:
repo_path = _resolve_repository_path(repository, ctx)
run_command(
f"nautilus {repository['directory']} & disown"
f'nautilus "{repo_path}" & disown'
)
return
return
# --------------------------------------------------------
# terminal
# --------------------------------------------------------
# ------------------------------------------------------------------
# GNOME terminal command
# ------------------------------------------------------------------
if args.command == "terminal":
for repository in selected:
repo_path = _resolve_repository_path(repository, ctx)
run_command(
f'gnome-terminal --tab --working-directory="{repository["directory"]}"'
f'gnome-terminal --tab --working-directory="{repo_path}"'
)
return
return
# --------------------------------------------------------
# code
# --------------------------------------------------------
# ------------------------------------------------------------------
# VS Code workspace command
# ------------------------------------------------------------------
if args.command == "code":
if not selected:
print("No repositories selected.")
return
return
identifiers = [
get_repo_identifier(repo, ctx.all_repositories)
@@ -60,20 +87,25 @@ def handle_tools_command(
sorted_identifiers = sorted(identifiers)
workspace_name = "_".join(sorted_identifiers) + ".code-workspace"
directories_cfg = ctx.config_merged.get("directories") or {}
workspaces_dir = os.path.expanduser(
ctx.config_merged.get("directories").get("workspaces")
directories_cfg.get("workspaces", "~/Workspaces")
)
os.makedirs(workspaces_dir, exist_ok=True)
workspace_file = os.path.join(workspaces_dir, workspace_name)
folders = [{"path": repository["directory"]} for repository in selected]
folders = [
{"path": _resolve_repository_path(repository, ctx)}
for repository in selected
]
workspace_data = {
"folders": folders,
"settings": {},
}
if not os.path.exists(workspace_file):
with open(workspace_file, "w") as f:
with open(workspace_file, "w", encoding="utf-8") as f:
json.dump(workspace_data, f, indent=4)
print(f"Created workspace file: {workspace_file}")
else:

View File

@@ -6,8 +6,14 @@ from pkgmgr.core.repository.identifier import get_repo_identifier
from pkgmgr.core.repository.dir import get_repo_dir
def create_ink(repo, repositories_base_dir, bin_dir, all_repos,
quiet=False, preview=False):
def create_ink(
repo,
repositories_base_dir,
bin_dir,
all_repos,
quiet: bool = False,
preview: bool = False,
) -> None:
"""
Create a symlink for the repository's command.
@@ -18,6 +24,11 @@ def create_ink(repo, repositories_base_dir, bin_dir, all_repos,
Behavior:
- If repo["command"] is defined → create a symlink to it.
- If repo["command"] is missing or None → do NOT create a link.
Safety:
- If the resolved command path is identical to the final link target,
we skip symlink creation to avoid self-referential symlinks that
would break shell resolution ("too many levels of symbolic links").
"""
repo_identifier = get_repo_identifier(repo, all_repos)
@@ -31,6 +42,27 @@ def create_ink(repo, repositories_base_dir, bin_dir, all_repos,
link_path = os.path.join(bin_dir, repo_identifier)
# ------------------------------------------------------------------
# Safety guard: avoid self-referential symlinks
#
# Example of a broken situation we must avoid:
# - command = ~/.local/bin/package-manager
# - link_path = ~/.local/bin/package-manager
# - create_ink() removes the real binary and creates a symlink
# pointing to itself → zsh: too many levels of symbolic links
#
# If the resolved command already lives exactly at the target path,
# we treat it as "already installed" and skip any modification.
# ------------------------------------------------------------------
if os.path.abspath(command) == os.path.abspath(link_path):
if not quiet:
print(
f"[pkgmgr] Command for '{repo_identifier}' already lives at "
f"'{link_path}'. Skipping symlink creation to avoid a "
"self-referential link."
)
return
if preview:
print(f"[Preview] Would link {link_path}{command}")
return
@@ -65,7 +97,10 @@ def create_ink(repo, repositories_base_dir, bin_dir, all_repos,
if alias_name == repo_identifier:
if not quiet:
print(f"Alias '{alias_name}' equals identifier. Skipping alias creation.")
print(
f"Alias '{alias_name}' equals identifier. "
"Skipping alias creation."
)
return
try:

View File

@@ -7,7 +7,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "package-manager"
version = "0.7.10"
version = "0.7.13"
description = "Kevin's package-manager tool (pkgmgr)"
readme = "README.md"
requires-python = ">=3.11"

View File

@@ -0,0 +1,93 @@
"""
Integration test: clone all configured repositories using
--clone-mode https and --no-verification.
This test is intended to be run inside the Docker container where:
- network access is available,
- the config/config.yaml is present,
- and it is safe to perform real git operations.
It passes if the command completes without raising an exception.
"""
import runpy
import sys
import unittest
from test_install_pkgmgr_shallow import (
nix_profile_list_debug,
remove_pkgmgr_from_nix_profile,
pkgmgr_help_debug,
)
class TestIntegrationCloneAllHttps(unittest.TestCase):
def _run_pkgmgr_clone_all_https(self) -> None:
"""
Helper that runs the CLI command via main.py and provides
extra diagnostics if the command exits with a non-zero code.
"""
cmd_repr = "pkgmgr clone --all --clone-mode https --no-verification"
original_argv = sys.argv
try:
sys.argv = [
"pkgmgr",
"clone",
"--all",
"--clone-mode",
"https",
"--no-verification",
]
try:
# Execute main.py as if it was called from CLI.
# This will run the full clone pipeline inside the container.
runpy.run_module("main", run_name="__main__")
except SystemExit as exc:
# Convert SystemExit into a more helpful assertion with debug output.
exit_code = exc.code if isinstance(exc.code, int) else str(exc.code)
print("\n[TEST] pkgmgr clone --all failed with SystemExit")
print(f"[TEST] Command : {cmd_repr}")
print(f"[TEST] Exit code: {exit_code}")
# Additional Nix profile debug on failure (may still be useful
# if the clone step interacts with Nix-based tooling).
nix_profile_list_debug("ON FAILURE (AFTER SystemExit)")
raise AssertionError(
f"{cmd_repr!r} failed with exit code {exit_code}. "
"Scroll up to see the full pkgmgr/make output inside the container."
) from exc
finally:
sys.argv = original_argv
def test_clone_all_repositories_https(self) -> None:
"""
Run: pkgmgr clone --all --clone-mode https --no-verification
This will perform real git clone operations inside the container.
The test succeeds if no exception is raised and `pkgmgr --help`
works in a fresh interactive bash session afterwards.
"""
# Debug before cleanup (reusing the same helpers as the install test).
nix_profile_list_debug("BEFORE CLEANUP")
# Cleanup: aggressively try to drop any pkgmgr/profile entries
# (harmless for a pure clone test but keeps environments comparable).
remove_pkgmgr_from_nix_profile()
# Debug after cleanup
nix_profile_list_debug("AFTER CLEANUP")
# Run the actual clone with extended diagnostics
self._run_pkgmgr_clone_all_https()
# After successful clone: show `pkgmgr --help`
# via interactive bash (same helper as in the install test).
pkgmgr_help_debug()
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,74 @@
"""
E2E/Integration tests for the tool-related subcommands' --help output.
We assert that calling:
- pkgmgr explore --help
- pkgmgr terminal --help
- pkgmgr code --help
completes successfully. For --help, argparse exits with SystemExit(0),
which we treat as success and suppress in the helper.
"""
from __future__ import annotations
import os
import runpy
import sys
import unittest
from typing import List
# Resolve project root (the repo where main.py lives, e.g. /src)
PROJECT_ROOT = os.path.abspath(
os.path.join(os.path.dirname(__file__), "..", "..")
)
MAIN_PATH = os.path.join(PROJECT_ROOT, "main.py")
def _run_main(argv: List[str]) -> None:
"""
Helper to run main.py with the given argv.
This mimics a "pkgmgr ..." invocation in the E2E container.
For --help invocations, argparse will call sys.exit(0), which raises
SystemExit(0). We treat this as success and only re-raise non-zero
exit codes.
"""
old_argv = sys.argv
try:
sys.argv = ["pkgmgr"] + argv
try:
runpy.run_path(MAIN_PATH, run_name="__main__")
except SystemExit as exc: # argparse uses this for --help
# SystemExit.code can be int, str or None; for our purposes:
code = exc.code
if code not in (0, None):
# Non-zero exit code -> real error.
raise
# For 0/None: treat as success and swallow the exception.
finally:
sys.argv = old_argv
class TestToolsHelp(unittest.TestCase):
"""
E2E/Integration tests for tool commands' --help screens.
"""
def test_explore_help(self) -> None:
"""Ensure `pkgmgr explore --help` runs successfully."""
_run_main(["explore", "--help"])
def test_terminal_help(self) -> None:
"""Ensure `pkgmgr terminal --help` runs successfully."""
_run_main(["terminal", "--help"])
def test_code_help(self) -> None:
"""Ensure `pkgmgr code --help` runs successfully."""
_run_main(["code", "--help"])
if __name__ == "__main__":
unittest.main()

View File

@@ -1,6 +1,6 @@
"""
Integration test: install all configured repositories using
--clone-mode shallow (HTTPS shallow clone) and --no-verification.
Integration test: update all configured repositories using
--clone-mode https and --no-verification.
This test is intended to be run inside the Docker container where:
- network access is available,
@@ -21,37 +21,38 @@ from test_install_pkgmgr_shallow import (
)
class TestIntegrationInstallAllShallow(unittest.TestCase):
def _run_pkgmgr_install_all(self) -> None:
class TestIntegrationUpdateAllHttps(unittest.TestCase):
def _run_pkgmgr_update_all_https(self) -> None:
"""
Helper that runs the CLI command via main.py and provides
extra diagnostics if the command exits with a non-zero code.
"""
cmd_repr = "pkgmgr install --all --clone-mode shallow --no-verification"
cmd_repr = "pkgmgr update --all --clone-mode https --no-verification"
original_argv = sys.argv
try:
sys.argv = [
"pkgmgr",
"install",
"update",
"--all",
"--clone-mode",
"shallow",
"https",
"--no-verification",
]
try:
# Execute main.py as if it was called from CLI.
# This will run the full install pipeline inside the container.
# This will run the full update pipeline inside the container.
runpy.run_module("main", run_name="__main__")
except SystemExit as exc:
# Convert SystemExit into a more helpful assertion with debug output.
exit_code = exc.code if isinstance(exc.code, int) else str(exc.code)
print("\n[TEST] pkgmgr install --all failed with SystemExit")
print("\n[TEST] pkgmgr update --all failed with SystemExit")
print(f"[TEST] Command : {cmd_repr}")
print(f"[TEST] Exit code: {exit_code}")
# Additional Nix profile debug on failure
# Additional Nix profile debug on failure (useful if any update
# step interacts with Nix-based tooling).
nix_profile_list_debug("ON FAILURE (AFTER SystemExit)")
raise AssertionError(
@@ -62,11 +63,11 @@ class TestIntegrationInstallAllShallow(unittest.TestCase):
finally:
sys.argv = original_argv
def test_install_all_repositories_shallow(self) -> None:
def test_update_all_repositories_https(self) -> None:
"""
Run: pkgmgr install --all --clone-mode shallow --no-verification
Run: pkgmgr update --all --clone-mode https --no-verification
This will perform real installations/clones inside the container.
This will perform real git update operations inside the container.
The test succeeds if no exception is raised and `pkgmgr --help`
works in a fresh interactive bash session afterwards.
"""
@@ -74,16 +75,17 @@ class TestIntegrationInstallAllShallow(unittest.TestCase):
nix_profile_list_debug("BEFORE CLEANUP")
# Cleanup: aggressively try to drop any pkgmgr/profile entries
# (keeps the environment comparable to other integration tests).
remove_pkgmgr_from_nix_profile()
# Debug after cleanup
nix_profile_list_debug("AFTER CLEANUP")
# Run the actual install with extended diagnostics
self._run_pkgmgr_install_all()
# Run the actual update with extended diagnostics
self._run_pkgmgr_update_all_https()
# After successful installation: show `pkgmgr --help`
# via interactive bash (same as the pkgmgr-only test).
# After successful update: show `pkgmgr --help`
# via interactive bash (same helper as in the other integration tests).
pkgmgr_help_debug()

View File

@@ -1,11 +1,10 @@
# tests/unit/pkgmgr/installers/os_packages/test_debian_control.py
import os
import unittest
from unittest.mock import patch
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.installers.os_packages.debian_control import DebianControlInstaller
from pkgmgr.actions.repository.install.installers.os_packages.debian_control import (
DebianControlInstaller,
)
class TestDebianControlInstaller(unittest.TestCase):
@@ -29,14 +28,24 @@ class TestDebianControlInstaller(unittest.TestCase):
@patch("os.path.exists", return_value=True)
@patch("shutil.which", return_value="/usr/bin/dpkg-buildpackage")
def test_supports_true(self, mock_which, mock_exists):
"""
supports() should return True when dpkg-buildpackage is available
and a debian/control file exists in the repository.
"""
self.assertTrue(self.installer.supports(self.ctx))
@patch("os.path.exists", return_value=True)
@patch("shutil.which", return_value=None)
def test_supports_false_without_dpkg_buildpackage(self, mock_which, mock_exists):
"""
supports() should return False when dpkg-buildpackage is not available,
even if a debian/control file exists.
"""
self.assertFalse(self.installer.supports(self.ctx))
@patch("pkgmgr.actions.repository.install.installers.os_packages.debian_control.run_command")
@patch(
"pkgmgr.actions.repository.install.installers.os_packages.debian_control.run_command"
)
@patch("glob.glob", return_value=["/tmp/package-manager_0.1.1_all.deb"])
@patch("os.path.exists", return_value=True)
@patch("shutil.which")
@@ -47,7 +56,19 @@ class TestDebianControlInstaller(unittest.TestCase):
mock_glob,
mock_run_command,
):
# dpkg-buildpackage and apt-get are available
"""
run() should:
1. Install build dependencies (apt-get build-dep).
2. Build the package using dpkg-buildpackage -b -us -uc.
3. Discover built .deb files via glob.
4. Install the resulting .deb packages using a suitable tool:
- dpkg -i
- sudo dpkg -i
- or sudo apt-get install -y
"""
# Simulate dpkg-buildpackage and apt-get being available.
def which_side_effect(name):
if name == "dpkg-buildpackage":
return "/usr/bin/dpkg-buildpackage"
@@ -64,16 +85,35 @@ class TestDebianControlInstaller(unittest.TestCase):
# 1) apt-get update
self.assertTrue(any("apt-get update" in cmd for cmd in cmds))
# 2) apt-get build-dep ./
self.assertTrue(any("apt-get build-dep -y ./ " in cmd or
"apt-get build-dep -y ./"
in cmd for cmd in cmds))
# 2) apt-get build-dep -y ./ (with or without trailing space)
self.assertTrue(
any(
"apt-get build-dep -y ./ " in cmd
or "apt-get build-dep -y ./"
in cmd
for cmd in cmds
)
)
# 3) dpkg-buildpackage -b -us -uc
self.assertTrue(any("dpkg-buildpackage -b -us -uc" in cmd for cmd in cmds))
# 4) dpkg -i ../*.deb
self.assertTrue(any(cmd.startswith("sudo dpkg -i ") for cmd in cmds))
# 4) final installation of .deb packages:
# accept dpkg -i, sudo dpkg -i, or sudo apt-get install -y
has_plain_dpkg_install = any(cmd.startswith("dpkg -i ") for cmd in cmds)
has_sudo_dpkg_install = any(cmd.startswith("sudo dpkg -i ") for cmd in cmds)
has_apt_install = any(
cmd.startswith("sudo apt-get install -y ") for cmd in cmds
)
self.assertTrue(
has_plain_dpkg_install or has_sudo_dpkg_install or has_apt_install,
msg=(
"Expected one of 'dpkg -i', 'sudo dpkg -i' or "
"'sudo apt-get install -y', but got commands: "
f"{cmds}"
),
)
if __name__ == "__main__":

View File

@@ -1,10 +1,10 @@
# tests/unit/pkgmgr/installers/os_packages/test_rpm_spec.py
import unittest
from unittest.mock import patch
from pkgmgr.actions.repository.install.context import RepoContext
from pkgmgr.actions.repository.install.installers.os_packages.rpm_spec import RpmSpecInstaller
from pkgmgr.actions.repository.install.installers.os_packages.rpm_spec import (
RpmSpecInstaller,
)
class TestRpmSpecInstaller(unittest.TestCase):
@@ -28,6 +28,13 @@ class TestRpmSpecInstaller(unittest.TestCase):
@patch("glob.glob", return_value=["/tmp/repo/test.spec"])
@patch("shutil.which")
def test_supports_true(self, mock_which, mock_glob):
"""
supports() should return True when:
- rpmbuild is available, and
- at least one of dnf/yum/yum-builddep is available, and
- a *.spec file is present in the repo.
"""
def which_side_effect(name):
if name == "rpmbuild":
return "/usr/bin/rpmbuild"
@@ -42,9 +49,14 @@ class TestRpmSpecInstaller(unittest.TestCase):
@patch("glob.glob", return_value=[])
@patch("shutil.which")
def test_supports_false_missing_spec(self, mock_which, mock_glob):
"""
supports() should return False if no *.spec file is found,
even if rpmbuild is present.
"""
mock_which.return_value = "/usr/bin/rpmbuild"
self.assertFalse(self.installer.supports(self.ctx))
@patch.object(RpmSpecInstaller, "_prepare_source_tarball")
@patch("pkgmgr.actions.repository.install.installers.os_packages.rpm_spec.run_command")
@patch("glob.glob")
@patch("shutil.which")
@@ -53,8 +65,20 @@ class TestRpmSpecInstaller(unittest.TestCase):
mock_which,
mock_glob,
mock_run_command,
mock_prepare_source_tarball,
):
# glob.glob is used twice: once for *.spec, once for built RPMs
"""
run() should:
1. Determine the .spec file in the repo.
2. Call _prepare_source_tarball() once with ctx and spec path.
3. Install build dependencies via dnf/yum-builddep/yum.
4. Call rpmbuild -ba <spec>.
5. Find built RPMs via glob.
6. Install built RPMs via dnf/yum/rpm (here: dnf).
"""
# glob.glob is used twice: once for *.spec, once for built RPMs.
def glob_side_effect(pattern, recursive=False):
if pattern.endswith("*.spec"):
return ["/tmp/repo/package-manager.spec"]
@@ -77,16 +101,23 @@ class TestRpmSpecInstaller(unittest.TestCase):
self.installer.run(self.ctx)
# _prepare_source_tarball must have been called with the resolved spec path.
mock_prepare_source_tarball.assert_called_once_with(
self.ctx,
"/tmp/repo/package-manager.spec",
)
# Collect all command strings passed to run_command.
cmds = [c[0][0] for c in mock_run_command.call_args_list]
# 1) builddep
# 1) build dependencies (dnf builddep)
self.assertTrue(any("builddep -y" in cmd for cmd in cmds))
# 2) rpmbuild -ba
# 2) rpmbuild -ba <spec>
self.assertTrue(any(cmd.startswith("rpmbuild -ba ") for cmd in cmds))
# 3) rpm -i …
self.assertTrue(any(cmd.startswith("sudo rpm -i ") for cmd in cmds))
# 3) installation via dnf: "sudo dnf install -y <rpms>"
self.assertTrue(any(cmd.startswith("sudo dnf install -y ") for cmd in cmds))
if __name__ == "__main__":

View File

@@ -28,14 +28,27 @@ class TestNixFlakeInstaller(unittest.TestCase):
@patch("shutil.which", return_value="/usr/bin/nix")
@patch("os.path.exists", return_value=True)
def test_supports_true_when_nix_and_flake_exist(self, mock_exists, mock_which):
self.assertTrue(self.installer.supports(self.ctx))
"""
supports() should return True when:
- nix is available,
- flake.nix exists in the repo,
- and we are not inside a Nix dev shell.
"""
with patch.dict(os.environ, {"IN_NIX_SHELL": ""}, clear=False):
self.assertTrue(self.installer.supports(self.ctx))
mock_which.assert_called_with("nix")
mock_exists.assert_called_with(os.path.join(self.ctx.repo_dir, "flake.nix"))
@patch("shutil.which", return_value=None)
@patch("os.path.exists", return_value=True)
def test_supports_false_when_nix_missing(self, mock_exists, mock_which):
self.assertFalse(self.installer.supports(self.ctx))
"""
supports() should return False if nix is not available,
even if a flake.nix file exists.
"""
with patch.dict(os.environ, {"IN_NIX_SHELL": ""}, clear=False):
self.assertFalse(self.installer.supports(self.ctx))
@patch("os.path.exists", return_value=True)
@patch("shutil.which", return_value="/usr/bin/nix")
@@ -47,10 +60,12 @@ class TestNixFlakeInstaller(unittest.TestCase):
mock_exists,
):
"""
Ensure that run():
- first tries to remove the old 'package-manager' profile entry
- then installs both 'pkgmgr' and 'default' outputs.
run() should:
1. attempt to remove the old 'package-manager' profile entry, and
2. install both 'pkgmgr' and 'default' flake outputs.
"""
cmds = []
def side_effect(cmd, cwd=None, preview=False, *args, **kwargs):
@@ -59,18 +74,24 @@ class TestNixFlakeInstaller(unittest.TestCase):
mock_run_command.side_effect = side_effect
self.installer.run(self.ctx)
# Simulate a normal environment (not inside nix develop, installer enabled).
with patch.dict(
os.environ,
{"IN_NIX_SHELL": "", "PKGMGR_DISABLE_NIX_FLAKE_INSTALLER": ""},
clear=False,
):
self.installer.run(self.ctx)
remove_cmd = f"nix profile remove {self.installer.PROFILE_NAME} || true"
install_pkgmgr_cmd = f"nix profile install {self.ctx.repo_dir}#pkgmgr"
install_default_cmd = f"nix profile install {self.ctx.repo_dir}#default"
# At least these three commands must have been issued.
self.assertIn(remove_cmd, cmds)
self.assertIn(install_pkgmgr_cmd, cmds)
self.assertIn(install_default_cmd, cmds)
# Optional: ensure the remove call came first.
self.assertEqual(cmds[0], remove_cmd)
@patch("shutil.which", return_value="/usr/bin/nix")
@@ -90,8 +111,13 @@ class TestNixFlakeInstaller(unittest.TestCase):
mock_run_command.side_effect = side_effect
# Should not raise, SystemExit is swallowed internally.
self.installer._ensure_old_profile_removed(self.ctx)
with patch.dict(
os.environ,
{"IN_NIX_SHELL": "", "PKGMGR_DISABLE_NIX_FLAKE_INSTALLER": ""},
clear=False,
):
# Should not raise, SystemExit is swallowed internally.
self.installer._ensure_old_profile_removed(self.ctx)
remove_cmd = f"nix profile remove {self.installer.PROFILE_NAME} || true"
mock_run_command.assert_called_with(

View File

@@ -1,5 +1,3 @@
# tests/unit/pkgmgr/installers/test_python_installer.py
import os
import unittest
from unittest.mock import patch
@@ -28,18 +26,40 @@ class TestPythonInstaller(unittest.TestCase):
@patch("os.path.exists", side_effect=lambda path: path.endswith("pyproject.toml"))
def test_supports_true_when_pyproject_exists(self, mock_exists):
self.assertTrue(self.installer.supports(self.ctx))
"""
supports() should return True when a pyproject.toml exists in the repo
and we are not inside a Nix dev shell.
"""
with patch.dict(os.environ, {"IN_NIX_SHELL": ""}, clear=False):
self.assertTrue(self.installer.supports(self.ctx))
@patch("os.path.exists", return_value=False)
def test_supports_false_when_no_pyproject(self, mock_exists):
self.assertFalse(self.installer.supports(self.ctx))
"""
supports() should return False when no pyproject.toml exists.
"""
with patch.dict(os.environ, {"IN_NIX_SHELL": ""}, clear=False):
self.assertFalse(self.installer.supports(self.ctx))
@patch("pkgmgr.actions.repository.install.installers.python.run_command")
@patch("os.path.exists", side_effect=lambda path: path.endswith("pyproject.toml"))
def test_run_installs_project_from_pyproject(self, mock_exists, mock_run_command):
self.installer.run(self.ctx)
"""
run() should invoke pip to install the project from pyproject.toml
when we are not inside a Nix dev shell.
"""
# Simulate a normal environment (not inside nix develop).
with patch.dict(os.environ, {"IN_NIX_SHELL": ""}, clear=False):
self.installer.run(self.ctx)
# Ensure run_command was actually called.
mock_run_command.assert_called()
# Extract the command string.
cmd = mock_run_command.call_args[0][0]
self.assertIn("pip install .", cmd)
# Ensure the working directory is the repo dir.
self.assertEqual(
mock_run_command.call_args[1].get("cwd"),
self.ctx.repo_dir,

View File

@@ -0,0 +1,168 @@
from __future__ import annotations
import json
import os
import tempfile
import unittest
from types import SimpleNamespace
from typing import Any, Dict, List
from pkgmgr.cli.commands.tools import handle_tools_command
Repository = Dict[str, Any]
class _Args:
"""
Simple helper object to mimic argparse.Namespace for handle_tools_command.
"""
def __init__(self, command: str) -> None:
self.command = command
class TestHandleToolsCommand(unittest.TestCase):
"""
Unit tests for pkgmgr.cli.commands.tools.handle_tools_command.
We focus on:
- Correct path resolution for repositories that have a 'directory' key.
- Correct shell commands for 'explore' and 'terminal'.
- Proper workspace creation and invocation of 'code' for the 'code' command.
"""
def setUp(self) -> None:
# Two fake repositories with explicit 'directory' entries so that
# _resolve_repository_path() does not need to call get_repo_dir().
self.repos: List[Repository] = [
{"alias": "repo1", "directory": "/tmp/repo1"},
{"alias": "repo2", "directory": "/tmp/repo2"},
]
# Minimal CLI context; only attributes used in tools.py are provided.
self.ctx = SimpleNamespace(
config_merged={"directories": {"workspaces": "~/Workspaces"}},
all_repositories=self.repos,
repositories_base_dir="/base/dir",
)
# ------------------------------------------------------------------ #
# Helper
# ------------------------------------------------------------------ #
def _patch_run_command(self):
"""
Convenience context manager for patching run_command in tools module.
"""
from unittest.mock import patch
return patch("pkgmgr.cli.commands.tools.run_command")
# ------------------------------------------------------------------ #
# Tests for 'explore'
# ------------------------------------------------------------------ #
def test_explore_uses_directory_paths(self) -> None:
"""
The 'explore' command should call Nautilus with the resolved
repository paths and use '& disown' as in the implementation.
"""
from unittest.mock import call
args = _Args(command="explore")
with self._patch_run_command() as mock_run_command:
handle_tools_command(args, self.ctx, self.repos)
expected_calls = [
call('nautilus "/tmp/repo1" & disown'),
call('nautilus "/tmp/repo2" & disown'),
]
self.assertEqual(mock_run_command.call_args_list, expected_calls)
# ------------------------------------------------------------------ #
# Tests for 'terminal'
# ------------------------------------------------------------------ #
def test_terminal_uses_directory_paths(self) -> None:
"""
The 'terminal' command should open a GNOME Terminal tab with the
repository as its working directory.
"""
from unittest.mock import call
args = _Args(command="terminal")
with self._patch_run_command() as mock_run_command:
handle_tools_command(args, self.ctx, self.repos)
expected_calls = [
call('gnome-terminal --tab --working-directory="/tmp/repo1"'),
call('gnome-terminal --tab --working-directory="/tmp/repo2"'),
]
self.assertEqual(mock_run_command.call_args_list, expected_calls)
# ------------------------------------------------------------------ #
# Tests for 'code'
# ------------------------------------------------------------------ #
def test_code_creates_workspace_and_calls_code(self) -> None:
"""
The 'code' command should:
- Build a workspace file name from sorted repository identifiers.
- Resolve the repository paths into VS Code 'folders'.
- Create the workspace file if it does not exist.
- Call 'code "<workspace_file>"' via run_command.
"""
from unittest.mock import patch
args = _Args(command="code")
with tempfile.TemporaryDirectory() as tmpdir:
# Patch expanduser so that the configured '~/Workspaces'
# resolves into our temporary directory.
with patch(
"pkgmgr.cli.commands.tools.os.path.expanduser"
) as mock_expanduser:
mock_expanduser.return_value = tmpdir
# Patch get_repo_identifier so the resulting workspace file
# name is deterministic and easy to assert.
with patch(
"pkgmgr.cli.commands.tools.get_repo_identifier"
) as mock_get_identifier:
mock_get_identifier.side_effect = ["repo-b", "repo-a"]
with self._patch_run_command() as mock_run_command:
handle_tools_command(args, self.ctx, self.repos)
# The identifiers are ['repo-b', 'repo-a'], which are
# sorted to ['repo-a', 'repo-b'] and joined with '_'.
expected_workspace_name = "repo-a_repo-b.code-workspace"
expected_workspace_file = os.path.join(
tmpdir, expected_workspace_name
)
# Workspace file should have been created.
self.assertTrue(
os.path.exists(expected_workspace_file),
"Workspace file was not created.",
)
# The content of the workspace must be valid JSON with
# the expected folder paths.
with open(expected_workspace_file, "r", encoding="utf-8") as f:
data = json.load(f)
self.assertIn("folders", data)
folder_paths = {f["path"] for f in data["folders"]}
self.assertEqual(
folder_paths,
{"/tmp/repo1", "/tmp/repo2"},
)
# And VS Code must have been invoked with that workspace.
mock_run_command.assert_called_once_with(
f'code "{expected_workspace_file}"'
)

View File

@@ -0,0 +1,108 @@
import os
import tempfile
import unittest
from unittest.mock import patch
from pkgmgr.core.command.ink import create_ink
class TestCreateInk(unittest.TestCase):
@patch("pkgmgr.core.command.ink.get_repo_dir")
@patch("pkgmgr.core.command.ink.get_repo_identifier")
def test_self_referential_command_skips_symlink(
self,
mock_get_repo_identifier,
mock_get_repo_dir,
):
"""
If the resolved command path is identical to the final link target,
create_ink() must NOT replace it with a self-referential symlink.
This simulates the situation where the command already lives at
~/.local/bin/<identifier> and we would otherwise create a symlink
pointing to itself.
"""
mock_get_repo_identifier.return_value = "package-manager"
mock_get_repo_dir.return_value = "/fake/repo"
with tempfile.TemporaryDirectory() as bin_dir:
# Simulate an existing real binary at the final link location.
command_path = os.path.join(bin_dir, "package-manager")
with open(command_path, "w", encoding="utf-8") as f:
f.write("#!/bin/sh\necho package-manager\n")
# Sanity check: not a symlink yet.
self.assertTrue(os.path.exists(command_path))
self.assertFalse(os.path.islink(command_path))
repo = {"command": command_path}
# This must NOT turn the file into a self-referential symlink.
create_ink(
repo=repo,
repositories_base_dir="/fake/base",
bin_dir=bin_dir,
all_repos=[],
quiet=True,
preview=False,
)
# After create_ink(), the file must still exist and must not be a symlink.
self.assertTrue(os.path.exists(command_path))
self.assertFalse(
os.path.islink(command_path),
"create_ink() must not create a self-referential symlink "
"when command == link_path",
)
@patch("pkgmgr.core.command.ink.get_repo_dir")
@patch("pkgmgr.core.command.ink.get_repo_identifier")
def test_create_symlink_for_normal_command(
self,
mock_get_repo_identifier,
mock_get_repo_dir,
):
"""
In the normal case (command path != link target), create_ink()
must create a symlink in bin_dir pointing to the given command,
and optionally an alias symlink when repo['alias'] is set.
"""
mock_get_repo_identifier.return_value = "mytool"
with tempfile.TemporaryDirectory() as repo_dir, tempfile.TemporaryDirectory() as bin_dir:
mock_get_repo_dir.return_value = repo_dir
# Create a fake executable inside the repository.
command_path = os.path.join(repo_dir, "main.sh")
with open(command_path, "w", encoding="utf-8") as f:
f.write("#!/bin/sh\necho mytool\n")
os.chmod(command_path, 0o755)
repo = {
"command": command_path,
"alias": "mt",
}
create_ink(
repo=repo,
repositories_base_dir="/fake/base",
bin_dir=bin_dir,
all_repos=[],
quiet=True,
preview=False,
)
link_path = os.path.join(bin_dir, "mytool")
alias_path = os.path.join(bin_dir, "mt")
# Main link must exist and point to the command.
self.assertTrue(os.path.islink(link_path))
self.assertEqual(os.readlink(link_path), command_path)
# Alias must exist and point to the main link.
self.assertTrue(os.path.islink(alias_path))
self.assertEqual(os.readlink(alias_path), link_path)
if __name__ == "__main__":
unittest.main()