Trust Assessment
personal-finance-beancount received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The sole finding is a high-severity Potential Command Injection via User-Provided Filename.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via User-Provided Filename.** The skill's `SKILL.md` instructs the LLM to execute a Python script (`scripts/analyze_beancount.py`) on a filename (`<beancount_file>`) that is explicitly derived from user input (e.g., an uploaded file path such as `/mnt/user-data/uploads/finances.beancount`). If the LLM does not strictly sanitize or escape this filename before constructing and executing the shell command, a malicious user could inject arbitrary shell commands: a filename like `'; rm -rf /tmp/evil;'` could lead to arbitrary code execution on the host system. The skill should require robust input sanitization and shell escaping for any user-provided data used in shell commands. Safer still, the skill could accept file content directly via a tool call rather than a path, or generate file paths internally (e.g., using UUIDs) and validate them strictly against allowed patterns, preventing injection of shell metacharacters. | LLM | SKILL.md:196 |
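The mitigation described in the finding can be sketched as follows. This is a minimal illustration, not code from the skill: the `run_analysis` helper, the allowed upload directory, and the extension check are all assumptions. The key points are validating the user-supplied path before use and invoking the script with an argument list (no shell), so metacharacters in the filename are never interpreted:

```python
import subprocess
from pathlib import Path

# Assumed upload root; the real skill would substitute its own sandbox path.
ALLOWED_DIR = Path("/mnt/user-data/uploads")

def run_analysis(user_path: str) -> str:
    """Validate a user-supplied path, then invoke the script without a shell."""
    path = Path(user_path).resolve()
    # Reject paths outside the allowed directory or with the wrong extension.
    if ALLOWED_DIR not in path.parents or path.suffix != ".beancount":
        raise ValueError(f"rejected path: {user_path!r}")
    # Passing an argument list with shell=False (the default) means shell
    # metacharacters in the filename are passed through as literal bytes.
    result = subprocess.run(
        ["python", "scripts/analyze_beancount.py", str(path)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# An injection attempt from the finding fails validation before any command runs:
# run_analysis("/mnt/user-data/uploads/'; rm -rf /tmp/evil;'")  # raises ValueError
```

Because validation happens before `subprocess.run`, a hostile filename never reaches the command line at all; and even a filename that passed validation could not break out, since no shell parses it.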
Scan History
Full report: [skillshield.io/report/4c4c631779d4b72a](https://skillshield.io/report/4c4c631779d4b72a)
Powered by SkillShield