Trust Assessment
github-pr received a trust score of 20/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 2 critical, 2 high, 0 medium, and 0 low severity. Key findings include arbitrary command execution, a dangerous `subprocess.run()` call, and command injection via crafted Git branch names.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Remediation: review all shell-execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary; prefer library APIs over shell commands. | Manifest | skills/dbhurley/github-pr/scripts/github-pr.py:27 |
| CRITICAL | **Execution of untrusted package-manager scripts from a PR.** The `test` command fetches, merges, installs dependencies for, and runs build/test scripts from a GitHub Pull Request, which means executing code from an untrusted source (the PR author). Malicious scripts in `package.json` (or the pnpm/yarn/bun equivalent) will run during the `install`, `build`, or `test` phases, yielding arbitrary code execution on the host: a direct supply-chain risk, since the skill explicitly instructs execution of these untrusted scripts. Remediation: warn users explicitly before running `github-pr test` on untrusted PRs; add a `--confirm` flag or a clear interactive prompt; in automated environments, run such tests in isolated sandboxes (e.g., Docker containers or VMs) with restricted network access and permissions. | LLM | scripts/github-pr.py:189 |
| HIGH | **Dangerous call: `subprocess.run()`.** A call to `subprocess.run()` in function `run` can execute arbitrary code. Remediation: avoid dangerous functions such as `exec`, `eval`, and `os.system`; use safer alternatives. | Static | skills/dbhurley/github-pr/scripts/github-pr.py:27 |
| HIGH | **Command injection via crafted Git branch names.** The skill builds `git fetch` and `git merge` commands from a user-provided `branch` name (or a default `pr/<number>`). Although `subprocess.run` is called with `shell=False`, a branch name containing shell metacharacters or git-specific escape sequences (e.g. `'; rm -rf /'`) could lead to command injection if `git` misinterprets the argument or does not strictly validate refspec components. Remediation: strictly validate and sanitize user-supplied branch names against git's ref-name format (e.g., via `git check-ref-format` or a robust regex), or use git plumbing commands or a library that provides safer git interactions. | LLM | scripts/github-pr.py:140 |
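The branch-name finding above suggests validating input with `git check-ref-format` or a regex before it reaches `git fetch`/`git merge`. A minimal sketch of that layered check follows; the helper name and the allow-list regex are illustrative assumptions, not code from the skill:

```python
import re
import subprocess

# Hypothetical helper (not part of github-pr): pre-validate a user-supplied
# branch name. Conservative allow-list: first character alphanumeric, then a
# narrow charset, so option-like names ("-upload-pack=...") and shell
# metacharacters are rejected up front.
SAFE_BRANCH_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._/-]*$")


def is_valid_branch_name(branch: str) -> bool:
    """True only if `branch` passes both the regex and git's own ref rules."""
    if not SAFE_BRANCH_RE.fullmatch(branch):
        return False  # rejects "'; rm -rf /'", leading '-', spaces, etc.
    try:
        # Defer the final decision to git's ref-name grammar
        # (catches cases the regex allows, e.g. "a..b" or a trailing ".lock").
        result = subprocess.run(
            ["git", "check-ref-format", "--branch", branch],
            capture_output=True,
        )
    except OSError:
        return False  # git unavailable: fail closed
    return result.returncode == 0
```

Only names that survive both layers would be passed (as list arguments, never through a shell) to `git fetch` or `git merge`.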
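For the untrusted-scripts finding, the suggested mitigation is to run a PR's install/build/test phases inside an isolated container. A sketch of building such a command line follows; the function name, the `node:20` image, and the resource caps are assumptions for illustration, not SkillShield recommendations:

```python
import shlex


def sandboxed_test_command(repo_dir: str, pkg_manager: str = "npm") -> str:
    """Build (but do not execute) a `docker run` command that runs a PR
    checkout's install/test scripts in a throwaway, low-privilege container.
    Illustrative defaults only."""
    inner = f"{pkg_manager} install && {pkg_manager} test"
    return " ".join([
        "docker run --rm",
        "--cap-drop ALL --security-opt no-new-privileges",  # minimal privileges
        "--memory 1g --cpus 1",                             # cap resource abuse
        f"-v {shlex.quote(repo_dir)}:/work -w /work",       # mount the checkout
        "node:20",                                          # assumed runtime image
        "sh -c " + shlex.quote(inner),
    ])


print(sandboxed_test_command("/tmp/pr-checkout"))
```

`shlex.quote` keeps the mounted path and the inner command intact even if they contain spaces; tighter network policy (e.g. dropping network access after dependency installation) would further limit exfiltration.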