Trust Assessment
agent-memory-patterns received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 0 high, 3 medium, and 0 low severity. Key findings include Potential Prompt Injection via Stored User Input, Regular Expression Denial of Service (ReDoS) in Search Function.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| MEDIUM | Potential Prompt Injection via Stored User Input The skill stores user-provided content directly into markdown files (`memory-logger.sh` writes to daily memory files, `external-content-queue.sh` writes to `pending-memories.md`). If a malicious LLM prompt causes the agent to store crafted instructions or data, and these files are later read and interpreted by the LLM or another component, it could lead to prompt injection or unintended actions. While the skill itself doesn't execute this content, it provides a vector for persistent injection. Implement sanitization or strict validation of user-provided strings before writing them to memory files, especially if these files are later used as input to an LLM or other interpretive systems. Consider encoding special characters or using a structured data format that prevents arbitrary interpretation. | LLM | SKILL.md:72 | |
| MEDIUM | Potential Prompt Injection via Stored User Input The skill stores user-provided content directly into markdown files (`memory-logger.sh` writes to daily memory files, `external-content-queue.sh` writes to `pending-memories.md`). If a malicious LLM prompt causes the agent to store crafted instructions or data, and these files are later read and interpreted by the LLM or another component, it could lead to prompt injection or unintended actions. While the skill itself doesn't execute this content, it provides a vector for persistent injection. Implement sanitization or strict validation of user-provided strings before writing them to memory files, especially if these files are later used as input to an LLM or other interpretive systems. Consider encoding special characters or using a structured data format that prevents arbitrary interpretation. | LLM | SKILL.md:268 | |
| MEDIUM | Regular Expression Denial of Service (ReDoS) in Search Function The `contextual_search` function constructs a regular expression pattern from user-provided keywords (`$@`) and uses it with `grep -E`. A malicious user could provide a crafted, complex regex pattern (e.g., `(a+)+b`) that causes `grep` to consume excessive CPU resources, leading to a denial of service for the agent or the underlying system. Sanitize user-provided keywords to escape all regular expression metacharacters before constructing the pattern for `grep -E`. Alternatively, use a search mechanism that does not rely on user-controlled regular expressions, or implement a timeout for the `grep` command if supported by the environment. | LLM | SKILL.md:198 |
Scan History
Embed Code
[](https://skillshield.io/report/20da1301e1525971)
Powered by SkillShield