Trust Assessment
requesting-code-review received a trust score of 83/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include Potential Data Exfiltration via Subagent Prompt, Potential Prompt Injection Vector for Subagent.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 11, 2026 (commit 6d52fe32). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential Data Exfiltration via Subagent Prompt The skill instructs the user to gather local repository data, including git SHAs (`BASE_SHA`, `HEAD_SHA`), and potentially sensitive project details (`WHAT_WAS_IMPLEMENTED`, `PLAN_OR_REQUIREMENTS`, which can reference local files like `docs/plans/deployment-plan.md`). This data is then used to construct a prompt for a subagent. If the subagent is not properly sandboxed, its communication channel is insecure, or its logs are accessible, this process could lead to the exfiltration of sensitive local project information. 1. **Sanitize/Filter Input:** Advise users to carefully review and sanitize any sensitive information before including it in prompts for subagents, especially if the subagent's trust level or data handling policies are unknown. 2. **Subagent Sandboxing:** Recommend running subagents in a secure, isolated environment with restricted network access and ephemeral storage. 3. **Secure Communication:** Ensure that the communication channel with the subagent is encrypted and authenticated. 4. **Logging Policies:** Implement strict logging policies for subagents, ensuring that sensitive prompt data is not logged or is purged quickly. 5. **Explicit Warnings:** Add explicit warnings in the skill about the potential for data leakage when sending local project details to external or untrusted subagents. | LLM | SKILL.md:24 | |
| MEDIUM | Potential Prompt Injection Vector for Subagent The skill instructs the user to fill a template (`code-reviewer.md`) with user-provided content for placeholders like `{WHAT_WAS_IMPLEMENTED}` and `{PLAN_OR_REQUIREMENTS}`. This combined content is then used as a prompt for a subagent. If a malicious user provides specially crafted input for these placeholders, they could inject instructions into the subagent, potentially manipulating its behavior, causing it to perform unintended actions, or extract information. 1. **Input Validation/Sanitization:** Implement robust input validation and sanitization for all user-provided content that will be incorporated into subagent prompts. 2. **Clear Delimiters:** Use clear and unambiguous delimiters (e.g., XML tags, JSON structures) to separate user input from system instructions within the `code-reviewer.md` template. 3. **Principle of Least Privilege:** Ensure the subagent operates with the minimum necessary permissions and access to tools or external resources. 4. **Contextual Awareness:** Design the subagent to be context-aware and resistant to out-of-scope instructions. 5. **User Education:** Educate users about the risks of prompt injection and advise against including untrusted or malicious content in the placeholders. | LLM | SKILL.md:24 |
Scan History
Embed Code
[](https://skillshield.io/report/232de167fba07206)
Powered by SkillShield