Trust Assessment
memory-pipeline received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 21 findings: 10 critical, 8 high, 3 medium, and 0 low severity. Key findings include Network egress to untrusted endpoints, Unsafe environment variable passthrough, Credential harvesting.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings21
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Network egress to untrusted endpoints Python requests POST/PUT to URL Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/bodii88/memory-pipeline-0-1-0/scripts/memory-briefing.py:177 | |
| CRITICAL | Network egress to untrusted endpoints Python requests POST/PUT to URL Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/bodii88/memory-pipeline-0-1-0/scripts/memory-briefing.py:192 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/bodii88/memory-pipeline-0-1-0/scripts/memory-briefing.py:51 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/bodii88/memory-pipeline-0-1-0/scripts/memory-briefing.py:52 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/bodii88/memory-pipeline-0-1-0/scripts/memory-extract.py:40 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/bodii88/memory-pipeline-0-1-0/scripts/memory-extract.py:41 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/bodii88/memory-pipeline-0-1-0/scripts/memory-link.py:42 | |
| CRITICAL | Prompt Injection via User/File Content in LLM Prompts The skill constructs LLM prompts by directly embedding content from various files (e.g., daily notes, session transcripts, SOUL.md, USER.md) and user input (ctx.input?.message). This content is not sanitized or validated before being inserted into the prompt. An attacker could craft malicious instructions within these sources to manipulate the LLM's behavior, leading to unintended actions, data leakage, or denial of service. Implement robust input validation and sanitization for all user-controlled or file-derived content before it is embedded into LLM prompts. Consider using a separate, instruction-following LLM for prompt generation and a content-generating LLM for actual output, or employ prompt templating engines that strictly separate instructions from data. Ensure that the LLM is sandboxed and cannot execute arbitrary code or access sensitive resources. | LLM | scripts/memory-briefing.py:150 | |
| CRITICAL | Prompt Injection via User/File Content in LLM Prompts The skill constructs LLM prompts by directly embedding content from various files (e.g., daily notes, session transcripts, SOUL.md, USER.md) and user input (ctx.input?.message). This content is not sanitized or validated before being inserted into the prompt. An attacker could craft malicious instructions within these sources to manipulate the LLM's behavior, leading to unintended actions, data leakage, or denial of service. Implement robust input validation and sanitization for all user-controlled or file-derived content before it is embedded into LLM prompts. Consider using a separate, instruction-following LLM for prompt generation and a content-generating LLM for actual output, or employ prompt templating engines that strictly separate instructions from data. Ensure that the LLM is sandboxed and cannot execute arbitrary code or access sensitive resources. | LLM | scripts/memory-extract.py:75 | |
| CRITICAL | Prompt Injection via User/File Content in LLM Prompts The `before_agent_start` hook directly injects a 'briefing packet' into the agent's system prompt. This packet includes `memoryText` (loaded from user-controlled files) and `taskHint` (from user input `ctx.input?.message`). This direct injection without sanitization creates a critical prompt injection vulnerability, allowing an attacker to manipulate the agent's instructions. Implement robust input validation and sanitization for all user-controlled or file-derived content before it is embedded into LLM prompts. Consider using a separate, instruction-following LLM for prompt generation and a content-generating LLM for actual output, or employ prompt templating engines that strictly separate instructions from data. Ensure that the LLM is sandboxed and cannot execute arbitrary code or access sensitive resources. | LLM | src/index.ts:26 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/bodii88/memory-pipeline-0-1-0/scripts/memory-briefing.py:51 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/bodii88/memory-pipeline-0-1-0/scripts/memory-briefing.py:52 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/bodii88/memory-pipeline-0-1-0/scripts/memory-extract.py:40 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/bodii88/memory-pipeline-0-1-0/scripts/memory-extract.py:41 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/bodii88/memory-pipeline-0-1-0/scripts/memory-link.py:42 | |
| HIGH | Command Injection via Unsanitized Workspace Path in Shell Commands The `HEARTBEAT.md` section explicitly instructs the agent to execute shell commands like `cd {workspace} && python3 ...`. If the `{workspace}` variable, which can be derived from the `CLAWDBOT_WORKSPACE` environment variable, contains shell metacharacters (e.g., `;`, `&`, `|`, `$()`), it could lead to arbitrary command injection. While the Python scripts attempt to determine a workspace, the `HEARTBEAT.md` provides a direct instruction for shell execution. Avoid direct concatenation of user-controlled or environment-derived variables into shell commands. If shell execution is necessary, use a subprocess library (e.g., Python's `subprocess.run` with `shell=False`) and pass arguments as a list to prevent shell interpretation. Ensure that the `CLAWDBOT_WORKSPACE` environment variable is strictly validated and sanitized if it can be influenced by untrusted sources. | LLM | SKILL.md:120 | |
| HIGH | Path Traversal and Arbitrary File Read/Write via Configurable File Paths The skill allows reading and writing to file paths that are constructed using `workspaceRoot` and configurable values (`briefingCfg.memoryFiles` for reading, `afterActionCfg.writeMemoryFile` for writing). If these configuration values can be manipulated (e.g., through prompt injection that modifies agent configuration or if the configuration itself is untrusted), an attacker could use path traversal sequences (e.g., `../../`) or absolute paths to read or write arbitrary files outside the intended `workspaceRoot`. The `fs.mkdir(..., { recursive: true })` call further exacerbates the write vulnerability by allowing arbitrary directory creation. Strictly validate and sanitize all file paths derived from configuration or user input. Ensure that paths are canonicalized and do not contain path traversal sequences. Restrict file operations to a tightly controlled, sandboxed directory. Implement an allowlist for file extensions or specific file names if possible. The `workspaceRoot` should be a dedicated, isolated directory for the agent. | LLM | src/memory.ts:4 | |
| HIGH | Path Traversal and Arbitrary File Read/Write via Configurable File Paths The skill allows reading and writing to file paths that are constructed using `workspaceRoot` and configurable values (`briefingCfg.memoryFiles` for reading, `afterActionCfg.writeMemoryFile` for writing). If these configuration values can be manipulated (e.g., through prompt injection that modifies agent configuration or if the configuration itself is untrusted), an attacker could use path traversal sequences (e.g., `../../`) or absolute paths to read or write arbitrary files outside the intended `workspaceRoot`. The `fs.mkdir(..., { recursive: true })` call further exacerbates the write vulnerability by allowing arbitrary directory creation. Strictly validate and sanitize all file paths derived from configuration or user input. Ensure that paths are canonicalized and do not contain path traversal sequences. Restrict file operations to a tightly controlled, sandboxed directory. Implement an allowlist for file extensions or specific file names if possible. The `workspaceRoot` should be a dedicated, isolated directory for the agent. | LLM | src/memory.ts:17 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/bodii88/memory-pipeline-0-1-0/scripts/memory-briefing.py:10 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/bodii88/memory-pipeline-0-1-0/scripts/memory-extract.py:10 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/bodii88/memory-pipeline-0-1-0/scripts/memory-link.py:9 |
Scan History
Embed Code
[](https://skillshield.io/report/474c9b8d80ba919c)
Powered by SkillShield