Trust Assessment
The memory skill received a trust score of 65/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 2 critical, 1 high, 0 medium, and 0 low severity. Key findings are unsafe direct insertion of user input into a shell command (recall.py), unsafe direct insertion of user/agent input into a shell command (capture.py), and potential for prompt injection via agent-written memory files.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 25/100, marking it as the area most in need of improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Unsafe direct insertion of user input into shell command (recall.py).** The skill's recommended protocol instructs the agent to directly insert unsanitized user input (`"user's question"`) into a `python3` shell command. This allows an attacker to inject arbitrary shell commands by crafting a malicious `user's question` containing shell metacharacters (e.g., `;`, `|`, `&`, `$(...)`). If the agent executes this command, it could lead to arbitrary code execution on the host system. The agent should sanitize or escape the `user's question` before passing it as an argument to the shell command. A safer approach is to design the script to read input from stdin or a temporary file, or to use `subprocess.run` with `shell=False` and pass arguments as a list, ensuring proper argument separation (see the first sketch after this table). | LLM | SKILL.md:37 |
| CRITICAL | **Unsafe direct insertion of user/agent input into shell command (capture.py).** The skill's recommended protocol and examples instruct the agent to directly insert unsanitized user-controlled or agent-generated text (`"conversation text here"`) into a `python3` shell command. Similar to the `recall.py` vulnerability, this allows an attacker to inject arbitrary shell commands by crafting malicious input containing shell metacharacters, leading to arbitrary code execution. The agent should sanitize or escape the input before passing it as an argument to the shell command. A safer approach is to design the script to read input from stdin or a temporary file, or to use `subprocess.run` with `shell=False` and pass arguments as a list, ensuring proper argument separation (see the first sketch after this table). | LLM | SKILL.md:60 |
| HIGH | **Potential for prompt injection via agent-written memory files.** The skill instructs the agent to 'update SESSION-STATE.md' with 'concrete detail' provided by the user. The `SESSION-STATE.md` file is explicitly designated as 'Active Working Memory' and the agent is instructed to 'Read it FIRST at every session start'. If user-provided detail contains instructions or manipulative text, and `SESSION-STATE.md` is later read by the host LLM without proper sanitization or context separation, it could lead to prompt injection, manipulating the LLM's subsequent behavior. Implement strict sanitization or escaping of user-provided content before writing to `SESSION-STATE.md`. When reading `SESSION-STATE.md`, the LLM should be explicitly instructed to treat its content as data, not instructions, or use a structured format (e.g., JSON, YAML) that inherently separates data from executable instructions (see the second sketch after this table). | LLM | SKILL.md:45 |
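The two CRITICAL findings share a remediation. The sketch below illustrates the pattern they recommend: invoking the scripts via `subprocess.run` with `shell=False` and arguments passed as a list, or feeding untrusted text through stdin. The argument conventions of `recall.py` and `capture.py` shown here are assumptions for illustration, not the skill's documented interface.

```python
import subprocess

def recall(question: str) -> str:
    # Argument list + shell=False: the question arrives as a single argv
    # entry, so metacharacters such as ;, |, & and $(...) are never
    # interpreted by a shell.
    result = subprocess.run(
        ["python3", "recall.py", question],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

def capture(conversation_text: str) -> None:
    # Alternative named in the same finding: pass untrusted text via
    # stdin so it never appears on the command line at all.
    subprocess.run(
        ["python3", "capture.py"],
        input=conversation_text,
        text=True,
        check=True,
    )
```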
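For the HIGH finding, one mitigation the report names is a structured format that inherently separates data from instructions. A minimal sketch follows, assuming a JSON-lines layout and illustrative field names; the skill's actual `SESSION-STATE.md` format is free-form markdown, so this is a proposed alternative, not its current behavior.

```python
import json
from datetime import datetime, timezone

def append_memory(path: str, user_detail: str) -> None:
    # Store user-provided detail as a tagged JSON record rather than free
    # prose, so a later reader can be instructed to treat "content"
    # strictly as data, never as instructions.
    entry = {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "source": "user",        # provenance tag: this text is untrusted
        "content": user_detail,  # stored verbatim, never interpolated
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```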