Trust Assessment
The `memory` skill received a trust score of 79/100, placing it in the Mostly Trusted category. It passed most security checks, with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 1 medium, and 1 low severity. Key findings include an arbitrary file read via `capture.py --file`, sensitive data exposure through `recall.py` output to the LLM context, and broad file system access scope for memory operations.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Arbitrary file read via `capture.py --file`.** When invoked with the `--file` argument, `capture.py` reads the content of any specified file and stores it in the agent's memory system. An attacker could craft a prompt instructing the agent to call `capture.py --file /path/to/sensitive/file.txt` (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, environment variable files). The file's content would then sit in memory, retrievable via `recall.py` and exposed to exfiltration by a subsequent prompt injection. *Remediation:* strictly validate and sanitize user-supplied file paths; restrict file access to a dedicated, isolated memory directory; consider an allow-list of directories or file extensions; if arbitrary reads are genuinely required, sandbox the agent's execution environment with minimal permissions. | LLM | scripts/capture.py:70 |
| MEDIUM | **Sensitive data exposure through `recall.py` output to LLM context.** `recall.py` searches the agent's memory files (daily logs, `MEMORY.md`, topic files) for a user-provided query and prints matching snippets to standard output, and `SKILL.md` explicitly instructs the agent to "READ the results (they're now in your context) THEN respond using that context." If sensitive information has been stored in memory (e.g., via `capture.py --file` or direct user input), an attacker can craft a query that pulls it into the LLM's context, where a subsequent prompt injection can reveal or exfiltrate it. *Remediation:* validate and sanitize `recall.py` queries; classify and access-control stored data; redact or mask sensitive information before it enters the LLM's context; harden the host LLM against prompt injection. | LLM | scripts/recall.py:200 |
| LOW | **Broad file system access scope for memory operations.** The `find_workspace()` function in every script searches up the directory tree for `MEMORY.md` or a `memory/` directory and, failing that, defaults to `Path.home() / "clawd"`. This grants the skill read and write access to a potentially large portion of the user's home directory rather than a dedicated, isolated skill-specific directory. Although routine operations are confined to `MEMORY_DIR` and `TOPICS_DIR` within the workspace, the broad initial search and default location enlarge the attack surface, especially combined with the arbitrary file read in `capture.py`. *Remediation:* confine file system access to a dedicated data directory within the skill's own package or a strictly defined data directory; avoid searching arbitrary parent directories or defaulting to the home directory; define the workspace explicitly and narrowly. | LLM | scripts/capture.py:15 |
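The path-confinement remediation for the HIGH and LOW findings can be sketched in a few lines. This is a minimal illustration, not the skill's actual code: `resolve_memory_path` is a hypothetical helper, and the `memory` subdirectory under the report's default workspace (`Path.home() / "clawd"`) is assumed as the dedicated data directory.

```python
from pathlib import Path

# Assumed dedicated data directory, based on the workspace default noted in the report.
MEMORY_DIR = Path.home() / "clawd" / "memory"

def resolve_memory_path(user_path: str) -> Path:
    """Resolve a user-supplied path and reject anything outside MEMORY_DIR.

    Resolving first collapses `..` segments and symlinks, so traversal
    like `../../etc/passwd` (or an absolute path) fails the containment
    check. Requires Python 3.9+ for Path.is_relative_to.
    """
    candidate = (MEMORY_DIR / user_path).resolve()
    if not candidate.is_relative_to(MEMORY_DIR.resolve()):
        raise ValueError(f"Refusing access outside memory directory: {candidate}")
    return candidate
```

Note that joining an absolute `user_path` onto `MEMORY_DIR` simply yields the absolute path in `pathlib`, which is exactly why the post-resolve containment check, rather than the join itself, must enforce the boundary.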
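For the MEDIUM finding, one mitigation the report suggests is masking likely secrets before a recalled snippet reaches the LLM's context. A rough sketch, with hypothetical patterns (real deployments would tune these to the secret formats they actually handle):

```python
import re

# Illustrative patterns only: PEM private-key blocks and key=value style credentials.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----"),
    re.compile(r"(?i)\b(?:api[_-]?key|token|password|secret)\s*[=:]\s*\S+"),
]

def redact(snippet: str) -> str:
    """Mask likely secrets in a memory snippet before it is printed to stdout
    (and thus before it enters the LLM context)."""
    for pattern in SECRET_PATTERNS:
        snippet = pattern.sub("[REDACTED]", snippet)
    return snippet
```

Pattern-based redaction is best-effort and complements, rather than replaces, the report's other recommendations (query validation, access control on stored data, and prompt-injection defenses in the host LLM).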