Trust Assessment
memory-keeper received a trust score of 37/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 1 critical, 2 high, 1 medium, and 1 low severity. Key findings include arbitrary command execution, a dangerous `subprocess.run()` call, and a Git push to an arbitrary remote that can exfiltrate data and misuse credentials.
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. Each individual layer scored 70 or above; the overall 37/100 trust score reflects the severity of the findings below.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Arbitrary command execution | Manifest | skills/crimsondevil333333/memory-keeper/scripts/memory_sync.py:25 |
| HIGH | Dangerous call: `subprocess.run()` | Static | skills/crimsondevil333333/memory-keeper/scripts/memory_sync.py:25 |
| HIGH | Git push to arbitrary remote can exfiltrate data and credentials | LLM | scripts/memory_sync.py:109 |
| MEDIUM | Arbitrary workspace and target paths allow broad file system access | LLM | scripts/memory_sync.py:160 |
| LOW | User-controlled input written to memory log file | LLM | scripts/memory_sync.py:90 |

Finding details

CRITICAL: Arbitrary command execution (Manifest; memory_sync.py:25). Python shell execution (`os.system`, `subprocess`). Recommendation: review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands.

HIGH: Dangerous call: `subprocess.run()` (Static; memory_sync.py:25). A call to `subprocess.run()` was detected in the function `run_git_command`; it can execute arbitrary code. Recommendation: avoid dangerous functions such as `exec`, `eval`, and `os.system`, and use safer alternatives.

HIGH: Git push to arbitrary remote can exfiltrate data and credentials (LLM; memory_sync.py:109). The `memory_sync.py` script accepts an arbitrary Git remote URL via the `--remote` argument and pushes to it via `--push`. If a malicious actor or a compromised LLM supplies a URL to a server they control, the agent's memory files, which may contain sensitive information, will be pushed to that remote. Furthermore, if the user's Git client is configured with credentials (e.g., SSH keys, HTTPS tokens), those credentials will be used to authenticate with the malicious remote, potentially exposing them to misuse. This constitutes both data exfiltration and indirect credential harvesting. Recommendation: implement an allow-list or strict validation for remote URLs to restrict pushes to trusted repositories; warn the user explicitly about the security implications of pushing to untrusted remotes; consider requiring explicit user confirmation for pushes to new or untrusted remotes; and ensure that sensitive information is not stored in memory files subject to synchronization.

MEDIUM: Arbitrary workspace and target paths allow broad file system access (LLM; memory_sync.py:160). The skill allows users to specify `--workspace` and `--target` paths, which are then used for copying files. The default paths are relatively safe, but a malicious prompt could instruct the agent to set `--workspace` to a sensitive system directory (e.g., `/` or `~`) and use `--allow-extra "*"` to copy a wide range of files. This could exfiltrate sensitive system files or corrupt data if `--target` points to a critical location. Recommendation: sanitize or restrict paths so that `--workspace` and `--target` cannot point to critical system directories; consider a default deny-list for sensitive paths or require explicit confirmation for operations outside a defined safe zone; and limit the scope of `--allow-extra` patterns or warn on broad patterns.

LOW: User-controlled input written to memory log file (LLM; memory_sync.py:90). The `log_memory_update` function writes user-controlled values from the `--target` and `--remote` arguments into a markdown log file within the agent's `memory/` directory. If these arguments contain carefully crafted prompt-injection instructions, and the host LLM later reads these log files as part of its context, it could be manipulated. The risk of direct execution is low, but this is a potential vector for subtle manipulation. Recommendation: sanitize or escape user-provided strings (`target`, `remote`) before writing them to files that may be consumed by an LLM, and consider truncating or hashing long or complex inputs to limit the size of injection payloads.
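The CRITICAL and first HIGH findings both concern shell execution from `run_git_command`. A minimal hardened sketch, assuming a helper of that shape (the allow-list and signature here are illustrative, not taken from `memory_sync.py`), passes argv as a list so no shell is involved and rejects unexpected git subcommands:

```python
import subprocess

# Hypothetical allow-list; a real deployment would tailor this.
ALLOWED_SUBCOMMANDS = {"status", "add", "commit", "pull"}

def run_git_command(args, cwd):
    """Run a git subcommand without a shell and without interpolation.

    args: list like ["status", "--short"]; never a single shell string.
    """
    if not args or args[0] not in ALLOWED_SUBCOMMANDS:
        raise ValueError(f"git subcommand not allowed: {args[:1]}")
    # shell=False (the default) means no metacharacter expansion, so a
    # value like "status; rm -rf /" is just an (invalid) literal argument.
    return subprocess.run(
        ["git", *args], cwd=cwd, capture_output=True, text=True, check=True
    )
```

Note that `push` is deliberately absent from the allow-list above, matching the report's concern about pushes to arbitrary remotes.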
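For the exfiltration finding, the recommended allow-list validation of `--remote` could be sketched as follows (hypothetical function and host list; the report only recommends the technique, it does not specify this API):

```python
from urllib.parse import urlparse

# Illustrative trusted-host list; a real skill would make this configurable.
TRUSTED_HOSTS = {"github.com", "gitlab.com"}

def validate_remote(url: str) -> str:
    """Reject remote URLs before any push; only HTTPS to trusted hosts."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError(f"refusing non-HTTPS remote: {url}")
    if parsed.hostname not in TRUSTED_HOSTS:
        raise ValueError(f"refusing untrusted remote host: {parsed.hostname}")
    return url
```

Failing closed here means a compromised prompt cannot redirect the push (and the Git credentials used to authenticate it) to an attacker-controlled server.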
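The MEDIUM finding's "defined safe zone" for `--workspace` and `--target` can be enforced by resolving paths before use and rejecting anything outside a root directory. A sketch, assuming a hypothetical safe root (requires Python 3.9+ for `Path.is_relative_to`):

```python
from pathlib import Path

# Illustrative safe zone; not a path used by memory-keeper itself.
SAFE_ROOT = Path.home() / "agent-workspaces"

def check_path(p: str) -> Path:
    """Resolve a user-supplied path (following symlinks and '..')
    and refuse it unless it stays inside SAFE_ROOT."""
    resolved = Path(p).expanduser().resolve()
    if not resolved.is_relative_to(SAFE_ROOT.resolve()):
        raise ValueError(f"path outside safe zone: {resolved}")
    return resolved
```

Resolving before comparison matters: a literal prefix check on the raw string can be bypassed with `..` segments or symlinks.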
[Full report on SkillShield](https://skillshield.io/report/57e9210b8a59936d)
Powered by SkillShield