Trust Assessment
self-validating-example received a trust score of 72/100, placing it in the Caution category. This skill carries security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is Command Injection via `$OUTPUT` in `post_tool_use` hooks.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Command Injection via `$OUTPUT` in `post_tool_use` hooks | LLM | SKILL.md:10 |

The `post_tool_use` hooks embed the `$OUTPUT` variable directly into shell commands without apparent sanitization. `$OUTPUT` is generated by the LLM and must be treated as untrusted input: a malicious LLM response could inject arbitrary shell commands by crafting `$OUTPUT` to contain metacharacters (e.g., `;`, `&`, `|`, `$(...)`) or by exploiting arguments of the invoked commands (e.g., `npm test -- --testPathPattern="malicious.js; rm -rf /"`). This allows arbitrary code execution on the host system.

Remediation: implement robust sanitization or validation of `$OUTPUT` before it is used in shell commands. For file paths, canonicalize them, validate them against allowed patterns, and restrict them to expected directories. Consider a safer execution environment that does not interpret shell metacharacters from untrusted input, or pass `$OUTPUT` as a distinct argument to the command if the command supports it, rather than embedding it directly into the command string.
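The recommended remediation can be sketched in Python as a minimal illustration (this is not the skill's actual hook code; `ALLOWED_DIR`, `SAFE_PATTERN`, and `validate_output` are hypothetical names, and the `npm test` invocation mirrors the example from the finding): validate the untrusted value against an allowlist pattern, canonicalize and confine the path, then invoke the command as an argument list so no shell ever interprets it.

```python
import re
import subprocess
from pathlib import Path

# Hypothetical allowed root for LLM-supplied test paths (assumption).
ALLOWED_DIR = Path("/workspace/tests")
# Allowlist: word characters, dots, slashes, hyphens only. This rejects
# shell metacharacters such as ;, &, |, $(...), spaces, and quotes.
SAFE_PATTERN = re.compile(r"[\w./-]+")

def validate_output(untrusted: str) -> Path:
    """Validate an LLM-generated value before it reaches a command line."""
    if not SAFE_PATTERN.fullmatch(untrusted):
        raise ValueError(f"disallowed characters in {untrusted!r}")
    # Canonicalize, then confine to the expected directory.
    resolved = (ALLOWED_DIR / untrusted).resolve()
    if not resolved.is_relative_to(ALLOWED_DIR):  # Python 3.9+
        raise ValueError(f"{resolved} escapes {ALLOWED_DIR}")
    return resolved

def run_tests_on(untrusted: str) -> None:
    path = validate_output(untrusted)
    # Argument-list form: no shell is spawned, so metacharacters in the
    # value are passed literally instead of being interpreted.
    subprocess.run(["npm", "test", "--", f"--testPathPattern={path}"],
                   check=True)
```

Note that the argument-list form alone is not sufficient if the invoked tool itself interprets its arguments; the allowlist and directory confinement guard against that second layer.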
Embed Code
[SkillShield report for self-validating-example](https://skillshield.io/report/8c3d8ce81715eb21)
Powered by SkillShield