Trust Assessment
screenshot-capture received a trust score of 65/100, placing it in the Caution category. Users should review the security findings below before deploying this skill.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings: Command Injection in a file copy operation; Prompt Injection via untrusted input written to LLM knowledge base files; and Prompt Injection via untrusted input used in reminder text.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection in file copy operation.** The skill constructs a `cp` command using the placeholders `[inbound image]` and `[descriptive-name]`. If these placeholders are populated directly from untrusted user input (e.g., Enzo's comments or image metadata), an attacker could inject shell commands by crafting malicious filenames (e.g., `'; rm -rf /'` or `$(cat /etc/passwd)`), allowing arbitrary command execution on the host system. *Remediation:* Sanitize or validate all user-provided input used in shell commands. Prefer safe APIs or libraries that handle file operations without direct shell execution. If shell execution is unavoidable, strictly whitelist allowed characters and escape all special shell characters in user-provided strings. | LLM | SKILL.md:10 |
| HIGH | **Prompt Injection via untrusted input written to LLM knowledge base files.** The skill writes user-provided content (Enzo's commentary, extracted content, category, topic, description, intent signal) directly into markdown files (`notes/frameworks.md`, `notes/ideas.md`, `notes/patterns.md`) that likely serve as a knowledge base or RAG source for the LLM. An attacker could embed malicious instructions in their input, which would then be stored in these files; when the LLM later reads them as part of its context, the hidden instructions could manipulate its behavior, leading to prompt injection. *Remediation:* Sanitize and validate all user-provided content before writing it to files used as LLM context. Consider isolating raw untrusted input in separate storage and feeding only a sanitized, summarized, or structured version to the LLM. Filter for keywords or patterns indicative of prompt injection. | LLM | SKILL.md:28 |
| MEDIUM | **Prompt Injection via untrusted input used in reminder text.** The skill constructs reminder text from untrusted input in Enzo's comments or extracted content (e.g., `[framework]`, `[hack]`, `[idea]`). If the reminder system is LLM-driven, or the reminder text is later processed by an LLM, an attacker could inject malicious instructions through these placeholders, causing the LLM to perform unintended actions when processing the reminder, or manipulating the reminder system itself if it has programmable capabilities. *Remediation:* Sanitize and validate all user-provided input before incorporating it into reminder text. If the reminder system is LLM-driven, ensure it has robust prompt injection defenses; if it executes commands, treat this as a command injection vector and apply the same sanitization. | LLM | SKILL.md:42 |
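The remediation for the critical finding amounts to never routing untrusted names through a shell. The sketch below is illustrative, not the skill's actual code: `safe_copy` and its whitelist are assumed names standing in for the `[inbound image]` and `[descriptive-name]` placeholders. It copies with Python's `shutil`, which never invokes `/bin/sh`, so shell metacharacters in a filename are inert, and it rejects destination names outside a conservative character whitelist:

```python
import re
import shutil
from pathlib import Path

def safe_copy(inbound_image: str, descriptive_name: str, dest_dir: str = "images") -> Path:
    """Copy a file without invoking a shell, rejecting suspicious destination names."""
    # Whitelist: letters, digits, dash, underscore, plus one optional extension.
    if not re.fullmatch(r"[A-Za-z0-9_-]+(\.[A-Za-z0-9]+)?", descriptive_name):
        raise ValueError(f"rejected unsafe destination name: {descriptive_name!r}")
    src = Path(inbound_image)
    dest = Path(dest_dir) / descriptive_name
    dest.parent.mkdir(parents=True, exist_ok=True)
    # shutil.copy is a direct file operation, not a shell command,
    # so inputs like "'; rm -rf /'" cannot be interpreted as commands.
    shutil.copy(src, dest)
    return dest
```

With this shape, an injected name such as `'; rm -rf /'` fails the whitelist check and raises before any file operation occurs.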
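For the two prompt injection findings (notes files and reminder text), one hedged mitigation is to neutralize instruction-like phrases and fence stored input as quoted data before it reaches any LLM-readable file. The helper names, the phrase list, and the `<untrusted-input>` fencing below are illustrative assumptions, not part of the skill; a real deployment would tune the patterns to its own threat model:

```python
import re

# Assumed, non-exhaustive patterns of instruction-like phrasing; tune per deployment.
SUSPICIOUS = re.compile(
    r"(ignore (all|previous|above) instructions|system prompt|you are now)",
    re.IGNORECASE,
)

def sanitize_note(text: str, max_len: int = 2000) -> str:
    """Neutralize untrusted content before it is written to an LLM-readable file."""
    text = text[:max_len]
    # Replace instruction-like phrases rather than silently storing them.
    text = SUSPICIOUS.sub("[filtered]", text)
    # Demote markdown headings so stored input cannot masquerade as document structure.
    text = re.sub(r"^(#{1,6})\s", r"\\\1 ", text, flags=re.MULTILINE)
    return text

def append_untrusted(path: str, text: str) -> None:
    """Append sanitized content inside explicit delimiters marking it as data."""
    with open(path, "a", encoding="utf-8") as f:
        f.write("\n<untrusted-input>\n" + sanitize_note(text) + "\n</untrusted-input>\n")
```

Pattern filtering alone is bypassable, which is why the report also recommends isolating raw input and feeding the LLM only a summarized or structured version; the fencing here is a defense-in-depth layer, not a complete fix.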