Trust Assessment
fork-manager received a trust score of 65/100, placing it in the Caution category: the skill carries security risks that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. The key findings are command injection via unsanitized `config.json` values in shell commands, excessive filesystem permissions due to an attacker-controlled `localPath`, and potential prompt injection via re-ingestion of unsanitized `history.md`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, making it the weakest area of the assessment.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
CRITICAL: Command Injection via unsanitized config.json values in shell commands (Layer: LLM, Location: SKILL.md:180)

The skill instructs the agent to construct and execute shell commands using values loaded directly from `config.json` (e.g., `localPath`, `upstreamRemote`, `mainBranch`, and branch names from `prBranches` and `localPatches`). If an attacker can modify `config.json` or control the `<repo-name>` used to locate it, they can inject arbitrary shell commands. For example, setting `localPath` to `/tmp; rm -rf / --no-preserve-root` would execute `rm -rf /` when `cd <localPath>` is run.

Remediation: All variables sourced from `config.json` or other untrusted input (such as GitHub API responses for PR titles, descriptions, and branch names) must be sanitized or quoted before use in shell commands. For paths and branch names, consider `printf %q` or a similar shell-specific quoting mechanism, or validate against expected patterns. For `gh` commands, pass parameters as distinct arguments rather than interpolating them into a single string, and rely on `gh`'s internal handling.

HIGH: Excessive filesystem permissions due to attacker-controlled localPath (Layer: LLM, Location: SKILL.md:180)

The skill instructs the agent to perform Git operations (e.g., `git stash`, `git branch -D`, `git push --force`) within the directory specified by `localPath` in `config.json`. If an attacker can manipulate `config.json` to point `localPath` at a sensitive system directory (e.g., `/` or `/etc`), the agent can be coerced into destructive or unauthorized operations on arbitrary parts of the filesystem, leading to data loss, system compromise, or privilege escalation. This is a direct consequence of the command injection vulnerability.

Remediation: Strictly validate and sanitize `localPath` so it points only to intended, isolated repository directories. Prevent path traversal (`../`) and ensure the path stays within a designated safe workspace. Combine this with the shell quoting described in the command injection remediation.

MEDIUM: Potential Prompt Injection via re-ingestion of unsanitized history.md (Layer: LLM, Location: SKILL.md:109)

The skill instructs the agent to log a "Full Report" (the complete output shown to the user) into `history.md`, which the agent later reads back for context (the skill's "Ler último output antes de começar" step, i.e., "read the last output before starting"). If the "Full Report" contains attacker-controlled strings (e.g., malicious PR titles, commit messages, or text crafted to mimic agent instructions), and that content is fed back into the LLM's prompt without sanitization, an attacker can manipulate the LLM's subsequent behavior.

Remediation: When re-ingesting `history.md` or any other agent-generated content that may contain user-controlled data, sanitize the content or clearly delimit it within the LLM's prompt so it cannot be interpreted as instructions. Consider prompt-structuring techniques (e.g., XML tags or JSON structures) that separate data from instructions.
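The quoting and validation advice for the CRITICAL finding can be sketched in shell. `validate_branch` and `quote_for_shell` are hypothetical helper names (not part of fork-manager), and `printf %q` assumes bash rather than plain POSIX sh:

```shell
#!/usr/bin/env bash
# Hypothetical helpers for sanitizing config.json values before they
# reach a shell command line. Names and patterns are illustrative.

# Accept only branch names built from a conservative character allow-list.
validate_branch() {
  case "$1" in
    ''|*[!A-Za-z0-9._/-]*) return 1 ;;  # empty, or contains a disallowed char
    *) return 0 ;;
  esac
}

# Quote an arbitrary string so the shell treats it as one literal word.
quote_for_shell() {
  printf '%q' "$1"
}

# The injection payload from the finding becomes a harmless literal path:
local_path='/tmp; rm -rf / --no-preserve-root'
eval "cd $(quote_for_shell "$local_path")" 2>/dev/null || echo "cd failed safely"
```

With `gh`, the equivalent fix is passing untrusted values as separate argv entries (e.g., `gh pr create --title "$title"`), so they are never re-parsed as shell syntax.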
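The workspace confinement suggested for the HIGH finding could look like the following pre-flight check; `WORKSPACE` and `path_is_safe` are illustrative names, not part of the skill:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight check: refuse Git operations unless localPath
# resolves (symlinks and ".." included) to a directory strictly inside an
# agreed workspace root. The WORKSPACE default is an assumption.
WORKSPACE="${WORKSPACE:-$HOME/repos}"

path_is_safe() {
  # cd + pwd -P canonicalizes the path; it fails if the directory is missing.
  resolved=$(cd "$1" 2>/dev/null && pwd -P) || return 1
  case "$resolved" in
    "$WORKSPACE"/*) return 0 ;;  # strictly inside the workspace
    *) return 1 ;;               # anywhere else, including the root itself
  esac
}
```

Because the check canonicalizes with `pwd -P`, a `localPath` of `$WORKSPACE/proj/../../etc` is rejected even though its prefix looks safe.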
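For the MEDIUM finding, one way to delimit re-ingested history is to wrap it in explicit tags and neutralize any embedded closing tag before the text re-enters the prompt; `wrap_history` and the `<history_data>` tag name are assumptions for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: wrap history.md in explicit delimiters and escape
# any literal closing tag an attacker may have smuggled into the log, so
# the logged text cannot escape the data region of the prompt.
wrap_history() {
  printf '<history_data>\n'
  # Replace a literal </history_data> with an escaped form (&lt;...).
  sed 's|</history_data>|\&lt;/history_data>|g' "$1"
  printf '\n</history_data>\n'
}
```

The same framing applies to any attacker-influenced text (PR titles, commit messages) before it is fed back to the LLM.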