Trust Assessment
capacities received a trust score of 88/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is Potential Command Injection via User Input in Curl Commands.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via User Input in Curl Commands | LLM | SKILL.md:20 |

The skill documentation provides `curl` command examples that include placeholders for user-controlled input such as `mdText`, `url`, and `searchTerm`. If an LLM interpolates untrusted user input directly into these fields when constructing and executing the `curl` commands, without proper shell or JSON escaping, the result is command injection: an attacker could craft input such as `"foo"; rm -rf /; echo "bar"` to break out of the JSON string or shell context and execute arbitrary commands on the host where the `curl` command runs.

**Remediation:** When generating and executing shell commands from user input, strictly validate all user-provided values and escape them for both the JSON context and the shell context. Build JSON payloads with a robust serialization library that handles escaping, and execute commands with parameterized arguments or a library that escapes them automatically, rather than by direct string concatenation.
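The remediation above can be sketched as follows. This is a minimal illustration, not the skill's actual code: the endpoint URL and the `mdText` field name are assumptions taken from the placeholders named in the finding. `json.dumps` handles JSON escaping, and passing the command to `subprocess.run` as an argument list (no `shell=True`) means the shell never parses the user input at all.

```python
import json
import subprocess

def post_markdown(md_text: str, api_url: str) -> None:
    """Send user-supplied markdown to an API without shell injection risk.

    The URL and payload shape are illustrative, not the skill's real API.
    """
    # json.dumps escapes quotes, backslashes, and control characters,
    # so malicious input cannot break out of the JSON string context.
    payload = json.dumps({"mdText": md_text})

    # An argument list (rather than a single shell string) hands each
    # value to curl verbatim; no shell is involved, so input like
    #   "foo"; rm -rf /; echo "bar"
    # stays inert data instead of becoming extra commands.
    subprocess.run(
        ["curl", "-sS", "-X", "POST",
         "-H", "Content-Type: application/json",
         "--data", payload,
         api_url],
        check=True,
    )
```

By contrast, interpolating `md_text` into an f-string and running it with `shell=True` would let a quote in the input terminate the JSON string and a `;` terminate the command, which is exactly the break-out the finding describes.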
[View the full report on SkillShield](https://skillshield.io/report/de5f97db7e950659)
Powered by SkillShield