Trust Assessment
The `books` skill received a trust score of 90/100, placing it in the Trusted category. It has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The sole finding is Potential Command Injection via User Input to the 'books' CLI.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via User Input to 'books' CLI | LLM | SKILL.md:23 |

The `SKILL.md` defines the command-line interface for the `books` skill, showing how user-provided input (e.g., search queries, work IDs, author IDs) is passed as arguments to the `books` executable. The skill's manifest explicitly lists `bash` as a required binary, strongly indicating that the `books` executable is a shell script. In shell scripts, directly embedding unsanitized user input into commands is a common source of command injection vulnerabilities. An attacker could craft malicious input (e.g., `"; rm -rf /`) that, if not properly quoted or escaped by the `books` script, would execute arbitrary shell commands on the host system. While the source code of the `books` script is not provided, the described interface combined with the `bash` dependency presents a credible, high-risk command injection vector.

The `books` script must rigorously sanitize and properly quote all user-provided arguments before executing them in a shell context. For example, in bash, arguments should be escaped with `printf '%q'` or passed as distinct parameters to `exec` calls rather than concatenated into a single shell string. The agent calling the skill should also be instructed to sanitize inputs before passing them on, though the primary defense should live within the skill itself.
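The quoting advice in the finding can be sketched in bash. This is a minimal illustration only: the actual `books` source was not available to the scan, and the payload and `echo` command here are hypothetical stand-ins.

```shell
#!/usr/bin/env bash
# Illustrative sketch of the finding's mitigation; not the real 'books' code.

user_input='"; rm -rf /'   # classic injection payload from the finding

# UNSAFE (do not do this): interpolating raw input into a command string
# lets the payload terminate the quoted argument and run its own command:
#   sh -c "echo \"$user_input\""

# Safer option 1: shell-escape the value with printf %q before interpolation.
escaped=$(printf '%q' "$user_input")
sh -c "echo $escaped"              # the payload is printed as inert text

# Safer option 2 (preferred): pass the input as a positional parameter, so
# the inner shell never re-parses it as code.
sh -c 'echo "$1"' _ "$user_input"
```

Either form leaves shell metacharacters inert; the positional-parameter style generalizes to any command and avoids hand-rolled quoting entirely.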
Scan History
Full report: [skillshield.io/report/ecb96a684b2b443c](https://skillshield.io/report/ecb96a684b2b443c)
Powered by SkillShield