Security Audit
shinpr/sub-agents-skills:skills/sub-agents
github.com/shinpr/sub-agents-skills

Trust Assessment
shinpr/sub-agents-skills:skills/sub-agents received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 2 critical, 3 high, 0 medium, and 0 low severity. Key findings include "Arbitrary command execution", "Dangerous call: subprocess.Popen()", and "Path Traversal and Data Exfiltration via Agent Name".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 26, 2026 (commit e91b1bbb). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`).<br>*Remediation:* Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | `skills/sub-agents/scripts/run_subagent.py:324` |
| CRITICAL | **Path Traversal and Data Exfiltration via Agent Name.** The script constructs file paths for agent definitions from a user-controlled `agent_name` parameter without sufficient sanitization. An attacker can use path traversal sequences (e.g., `../../../../etc/passwd`) in the `--agent` argument to read arbitrary files on the system; the file content is then loaded into memory as `system_context` and passed to an external sub-agent, creating a clear vector for data exfiltration.<br>*Remediation:* Strictly validate `agent_name`: allow only alphanumeric characters, hyphens, and underscores, and reject directory separators and `..` sequences. Alternatively, apply `os.path.basename()` to extract only the filename before constructing the path. | Static | `scripts/run_subagent.py:60` |
| HIGH | **Dangerous call: `subprocess.Popen()`.** Call to `subprocess.Popen()` detected in function `execute_agent`; this can execute arbitrary code.<br>*Remediation:* Avoid dangerous functions such as `exec`, `eval`, and `os.system`; use safer alternatives. | Static | `skills/sub-agents/scripts/run_subagent.py:324` |
| HIGH | **Excessive Permissions Declared vs. Required.** The skill's manifest declares `allowed-tools: Bash Read`, indicating it only needs to read bash output. However, `scripts/run_subagent.py` uses `subprocess.run` to execute external CLI programs, which requires `Bash Execute` permission. This discrepancy misrepresents the skill's actual capabilities and could lead to privilege escalation if the system grants `Bash Read` but the skill attempts `Bash Execute`.<br>*Remediation:* Update the `allowed-tools` field in the manifest to `Bash Execute`, accurately reflecting the use of `subprocess.run`. | Static | `SKILL.md:1` |
| HIGH | **Second-Order Command/Prompt Injection via Sub-Agent Arguments.** User-controlled input (the `--prompt` argument and the `system_context`, which can be arbitrary file content due to the path traversal vulnerability) is passed directly as arguments to external CLI programs via `subprocess.run`. Passing an argument list to `subprocess.run` is generally safe from direct shell injection, but if the external AI CLI interprets its `--prompt` or `--system` arguments as executable instructions, or allows arbitrary command execution within its own context, this skill becomes a vector for second-order command or prompt injection: an attacker could craft a malicious prompt, or plant malicious content in a file read via path traversal, that the sub-agent then executes.<br>*Remediation:* Validate and sanitize `prompt` and `system_context` before passing them to external CLIs. This may involve escaping characters the target CLIs treat specially, or running sub-agents in a sandboxed environment. Also ensure the external CLIs themselves are robust against command/prompt injection through their arguments. | LLM | `scripts/run_subagent.py:290` |
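The path-traversal remediation above can be sketched as a small validation helper. This is an illustrative example only, not the skill's actual code: the function name, the `.md` extension, and the allowed-character policy are assumptions drawn from the finding's recommendation.

```python
import os
import re

# Policy from the finding: alphanumerics, hyphens, and underscores only.
_AGENT_NAME_RE = re.compile(r"^[A-Za-z0-9_-]+$")

def resolve_agent_path(agents_dir: str, agent_name: str) -> str:
    """Hypothetical helper: map a user-supplied agent name to a definition file.

    Rejects anything that is not a plain name, so traversal sequences like
    '../../../../etc/passwd' never reach the filesystem.
    """
    if not _AGENT_NAME_RE.match(agent_name):
        raise ValueError(f"invalid agent name: {agent_name!r}")
    path = os.path.join(agents_dir, agent_name + ".md")
    # Defense in depth: confirm the resolved path stays inside agents_dir.
    real_dir = os.path.realpath(agents_dir)
    if os.path.commonpath([real_dir, os.path.realpath(path)]) != real_dir:
        raise ValueError("agent path escapes the agents directory")
    return path
```

Because the regex already forbids path separators and dots, the `commonpath` check is redundant in the happy path; it guards against future changes that loosen the name policy.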
[View the full report on SkillShield](https://skillshield.io/report/fea7aa3d26308fae)
Powered by SkillShield