Security Audit
Luispitik/sinapsis-3.2:skills/review-army
github.com/Luispitik/sinapsis-3.2Trust Assessment
Luispitik/sinapsis-3.2:skills/review-army received a trust score of 50/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 2 critical, 3 high, 0 medium, and 0 low severity. Key findings include Dangerous tool allowed: Bash, LLM interprets untrusted CLAUDE.md content as instructions, LLM processes untrusted `git diff` content without sanitization.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 10/100, indicating areas for improvement.
Last analyzed on April 9, 2026 (commit f405238d). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings5
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | LLM interprets untrusted CLAUDE.md content as instructions The skill explicitly reads the `CLAUDE.md` file and uses its content to influence the LLM's decision-making and specialist dispatch logic (e.g., "If CLAUDE.md says 'Vite' or 'React SPA' → skip nextjs specialist"). An attacker who can control the `CLAUDE.md` file (e.g., by submitting a malicious Pull Request) could inject instructions to manipulate the host LLM, leading to arbitrary actions or misdirection of the review process. Implement strict sanitization and validation for the content of `CLAUDE.md` before feeding it to the LLM. Avoid directly interpreting arbitrary text from untrusted sources as instructions. Consider using structured data (e.g., JSON) with a strict schema for configuration files, and parse it programmatically rather than relying on LLM interpretation. | LLM | SKILL.md:40 | |
| CRITICAL | LLM processes untrusted `git diff` content without sanitization The skill reads the full `git diff` output and uses it for "Scope classification" and "Critical pass" checks. If the `git diff` contains specially crafted text (e.g., in filenames, commit messages, or code comments) that acts as a prompt injection, an attacker could manipulate the LLM's behavior. This is a common vector for code review agents that process arbitrary code from untrusted sources. Implement robust sanitization and validation for all parts of the `git diff` content before it is processed by the LLM. Treat all content within the diff as untrusted user input and ensure it cannot be interpreted as instructions or commands. | LLM | SKILL.md:30 | |
| HIGH | Dangerous tool allowed: Bash The skill allows the 'Bash' tool without constraints. This grants arbitrary command execution. Remove unconstrained shell/exec tools from allowed-tools, or add specific command constraints. | Static | skills/review-army/SKILL.md:1 | |
| HIGH | Broad tool permissions increase attack surface The skill declares a very broad set of permissions, including `Bash`, `Edit`, `Write`, `Glob`, and `Agent`. While some of these might be necessary for a code review and auto-fix skill, they significantly increase the potential impact of a successful prompt injection or other compromise. `Bash` allows arbitrary command execution, `Edit`/`Write` allow arbitrary file modification, and `Agent` allows spawning sub-agents that inherit these same broad permissions, amplifying the risk. Review and restrict permissions to the absolute minimum required for the skill's functionality. Implement fine-grained access control where possible. For operations like `Edit`/`Write`, ensure they are highly constrained and validated, especially when triggered by LLM output. Consider sandboxing `Bash` execution. | LLM | SKILL.md:1 | |
| HIGH | Risk of arbitrary command execution and data exfiltration due to combined vulnerabilities The combination of critical prompt injection vulnerabilities (via `CLAUDE.md` and `git diff`) with the broad `Bash`, `Edit`, and `Write` permissions creates a high risk of arbitrary command injection and data exfiltration. A compromised LLM could be instructed to execute arbitrary shell commands (e.g., `cat .env`, `rm -rf /`), modify files with malicious code (e.g., via the "Fix-First workflow"), or exfiltrate sensitive data from the repository. The skill's own "Critical pass" section highlights "User input in shell commands without escaping" as a risk, indicating awareness but also potential for self-vulnerability. Address the underlying prompt injection vulnerabilities. Implement strict input validation and sanitization for all data derived from untrusted sources before it is used in `Bash` commands or file modification operations. Ensure that "fixes" are generated and applied within a secure, sandboxed environment or with strict content validation and user approval. | LLM | SKILL.md:1 |
Scan History
Embed Code
[](https://skillshield.io/report/401e18195c3c1a6f)
Powered by SkillShield