Trust Assessment
The skill `testing` received a trust score of 73/100, placing it in the Caution category. Users should review its security findings before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is Potential Command Injection via Unsanitized Placeholders in Shell Commands.
The analysis covered 4 layers: manifest_analysis, llm_behavioral_safety, dependency_graph, static_code_analysis. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 11, 2026 (commit 823aa29c). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Potential Command Injection via Unsanitized Placeholders in Shell Commands | Unknown | SKILL.md:8 |

The skill provides shell commands (`bunx vitest run`) that include placeholders (`[file-path]`, `[file]`). If the host LLM or its execution environment substitutes user-provided input directly into these placeholders without proper sanitization, an attacker could inject arbitrary shell commands. This could lead to remote code execution, data deletion, or other severe compromise of the environment where the commands are executed.

Recommended mitigation: the host LLM's execution environment must rigorously sanitize or validate any user-provided input before substituting it into command placeholders such as `[file-path]` or `[file]`. For example, allow only specific file patterns, escape shell metacharacters, or use a safer execution mechanism that does not directly concatenate strings into shell commands. The skill author could also add a note advising caution when using these placeholders with untrusted input.
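The mitigation above can be sketched as follows. This is a minimal illustration, not part of the skill itself: the allowlist pattern, the `validate_test_path` helper, and the choice of Python are assumptions made here for demonstration. It combines two of the suggested defenses: restricting input to an explicit file-path pattern, and invoking the command as an argument list rather than a shell string so metacharacters are never interpreted.

```python
import re
import subprocess

# Hypothetical allowlist: only plain relative paths to typical vitest
# test files (e.g. "src/app.test.ts"). Adjust the pattern to your project.
_SAFE_TEST_FILE = re.compile(r"^[\w./-]+\.(test|spec)\.[jt]sx?$")

def validate_test_path(file_path: str) -> str:
    """Return file_path if it is a safe relative test-file path, else raise."""
    if (
        file_path.startswith(("/", "-"))   # no absolute paths or option-like args
        or ".." in file_path               # no directory traversal
        or not _SAFE_TEST_FILE.match(file_path)
    ):
        raise ValueError(f"unsafe file path: {file_path!r}")
    return file_path

def run_vitest(file_path: str) -> None:
    # Argument list with shell=False (the default): the path is passed as a
    # single argv entry, so shell metacharacters in it are inert.
    subprocess.run(["bunx", "vitest", "run", validate_test_path(file_path)], check=True)
```

Passing an argument list instead of interpolating into a shell string is the key design choice: even if validation were bypassed, strings like `foo.test.ts; rm -rf /` would reach `vitest` as a literal file name rather than being executed by a shell.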
Scan History
Embed Code
[Full report on SkillShield](https://skillshield.io/report/2ed7ca222a00c922)
Powered by SkillShield