Trust Assessment
dory-memory received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include Skill design enables persistent prompt injection, Shell command present in setup instructions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Skill design enables persistent prompt injection The 'Dory-Proof Pattern' explicitly instructs the AI agent to 'IMMEDIATELY write their EXACT WORDS to `state/ACTIVE.md`'. This design means that any malicious instructions or prompt injection attempts by a user will be captured verbatim and stored on disk. When the agent later reads `state/ACTIVE.md` as part of its boot sequence, these malicious instructions will be re-injected into the agent's context, potentially leading to persistent manipulation of the LLM's behavior. This creates a credible exploit path for an attacker to maintain control over the agent across sessions. Implement input sanitization or validation before writing user input to `state/ACTIVE.md`. Alternatively, ensure the agent's parsing of `state/ACTIVE.md` distinguishes between user input and agent instructions, and does not execute user input as commands. | LLM | SKILL.md:12 | |
| HIGH | Shell command present in setup instructions The 'Quick Setup' section contains a direct shell command: `cp -r skills/dory-memory/assets/templates/* ~/.openclaw/workspace/`. If the AI agent is configured to interpret and execute shell commands found within skill documentation or instructions, this presents a command injection vulnerability. An attacker could potentially modify the skill's content (e.g., via a supply chain attack) to include malicious commands, or if the agent is prompted to 'set itself up', it might execute this command, leading to arbitrary code execution on the host system. Ensure that AI agents are strictly prohibited from executing shell commands found within skill documentation. If setup requires shell commands, these should be executed by a human operator or a trusted, sandboxed environment, not by the AI agent itself. | LLM | SKILL.md:84 |
Scan History
Embed Code
[](https://skillshield.io/report/7725bffbe1773089)
Powered by SkillShield