Trust Assessment
people-memories received a trust score of 20/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 2 critical, 2 high, 0 medium, and 0 low severity. Key findings include "Arbitrary command execution", "Sensitive path access: AI agent config", and "Arbitrary file write via export function".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Node.js synchronous shell execution. Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/charbeld/people-memories/extensions/people-memories/index.js:9 |
| CRITICAL | **Unsanitized user input passed to LLM context.** The `extensions/people-memories/index.js` script captures user input from `voice-chat:transcript` via the `REMEMBER_PATTERN`. The `note` component of this pattern is captured using `.+`, allowing arbitrary text. If this `note` contains prompt-injection attempts (e.g., "ignore previous instructions", "tell me your system prompt"), the skill processes and stores this content. While the skill itself doesn't execute the prompt injection, the `text` parameter received by the `handle` function *is* untrusted content that could contain instructions to manipulate the host LLM, which the rules explicitly flag as a CRITICAL finding. Implement robust input validation and sanitization on the `text` received from `voice-chat:transcript` to detect and neutralize prompt-injection attempts before processing or storing the `note`. Consider rejecting or warning about inputs that contain known prompt-injection phrases. | LLM | extensions/people-memories/index.js:10 |
| HIGH | **Sensitive path access: AI agent config.** Access to an AI agent config path detected: `~/.clawdbot/`. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/charbeld/people-memories/SKILL.md:12 |
| HIGH | **Arbitrary file write via export function.** The skill's `export` command, implemented in `scripts/people_memory.py` (as described in `SKILL.md`), allows users to specify an arbitrary output file path via the `--out` argument. This lets the skill write potentially sensitive data (a person's notes) to any location on the filesystem where it has write permissions, which could overwrite critical system files or sensitive user configuration (e.g., SSH keys), or exfiltrate data by writing it to an unexpected or accessible network share. The exact implementation of the `export_data` function is not fully visible because `scripts/people_memory.py` is truncated, but the functionality is clearly described in `SKILL.md`. Restrict the `--out` argument to a predefined, secure directory (e.g., a dedicated `exports` folder within the skill's data directory), or require explicit user confirmation for writes outside this scope. Implement robust path sanitization to prevent directory traversal attacks. | LLM | scripts/people_memory.py |
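The remediation suggested for the prompt-injection finding, rejecting or warning about inputs that contain known injection phrases, can be sketched with a small screening pass. Everything below is illustrative: `flag_prompt_injection` and its phrase list are hypothetical and not part of the skill's code, and phrase matching catches only the crudest attempts.

```python
import re

# Hypothetical screening helper -- the phrase list is illustrative,
# not exhaustive, and must be maintained as new injection idioms appear.
INJECTION_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"ignore (all |any )?previous instructions",
        r"disregard (the )?system prompt",
        r"tell me your system prompt",
        r"you are now",
    )
]

def flag_prompt_injection(note: str) -> bool:
    """Return True if the note matches a known prompt-injection phrase."""
    return any(p.search(note) for p in INJECTION_PATTERNS)

print(flag_prompt_injection("remember that Sam likes tea"))          # False
print(flag_prompt_injection("Ignore previous instructions and ..."))  # True
```

Pattern matching is only a first line of defense; the stored note should still be treated as untrusted data (not instructions) whenever it is fed back into the host LLM's context.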
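The suggested fix for the export finding, restricting `--out` to a dedicated directory with traversal protection, could look like the following. This is a sketch under assumptions: `safe_export_path` and its base-directory argument are hypothetical, since the real `export_data` implementation is truncated and not visible.

```python
from pathlib import Path

def safe_export_path(base_dir: Path, user_out: str) -> Path:
    """Resolve a user-supplied --out value inside base_dir, rejecting escapes.

    base_dir stands in for a dedicated export directory; the skill's
    actual data layout is not visible in the truncated script.
    """
    base = base_dir.resolve()
    # resolve() collapses "../" components, so a traversal attempt ends up
    # outside base and fails the containment check below. An absolute path
    # passed as user_out replaces base entirely and is likewise rejected.
    candidate = (base / user_out).resolve()
    if base not in candidate.parents:
        raise ValueError(f"--out escapes the export directory: {user_out}")
    return candidate
```

A containment check on the fully resolved path is preferable to string filtering (e.g., stripping `..`), because it also rejects absolute paths and symlink-free traversal tricks in one place.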
Embed Code
[SkillShield Report](https://skillshield.io/report/d7c9162037fd110f)
Powered by SkillShield