Trust Assessment
memory-pipeline received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 26 findings: 9 critical, 11 high, 6 medium, and 0 low severity. Key findings include Network egress to untrusted endpoints, Unsafe environment variable passthrough, Credential harvesting.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings26
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Network egress to untrusted endpoints Python requests POST/PUT to URL Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/joe-rlo/memory-pipeline/scripts/memory-briefing.py:177 | |
| CRITICAL | Network egress to untrusted endpoints Python requests POST/PUT to URL Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/joe-rlo/memory-pipeline/scripts/memory-briefing.py:192 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/joe-rlo/memory-pipeline/scripts/memory-briefing.py:51 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/joe-rlo/memory-pipeline/scripts/memory-briefing.py:52 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/joe-rlo/memory-pipeline/scripts/memory-extract.py:40 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/joe-rlo/memory-pipeline/scripts/memory-extract.py:41 | |
| CRITICAL | Credential harvesting Reading well-known credential environment variables Skills should only access environment variables they explicitly need. Bulk environment dumps (os.environ.copy, JSON.stringify(process.env)) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | skills/joe-rlo/memory-pipeline/scripts/memory-link.py:42 | |
| CRITICAL | Prompt Injection via User-Controlled Content in LLM Prompts The skill directly incorporates user-controlled content (daily notes, session transcripts, user-defined memory files, and agent input messages) into LLM prompts without sanitization. An attacker could craft malicious instructions within these inputs to manipulate the LLM's behavior, leading to unintended actions, data disclosure, or system compromise. Implement robust input sanitization and strict prompt templating. Consider using LLM-specific prompt injection defenses (e.g., input/output parsing, separate LLM calls for sensitive operations, or dedicated prompt injection detection models). Ensure user-provided content is always treated as data, not instructions. | LLM | scripts/memory-briefing.py:200 | |
| CRITICAL | Prompt Injection via User-Controlled Content in LLM Prompts The skill directly incorporates user-controlled content (daily notes, session transcripts, user-defined memory files, and agent input messages) into LLM prompts without sanitization. An attacker could craft malicious instructions within these inputs to manipulate the LLM's behavior, leading to unintended actions, data disclosure, or system compromise. Implement robust input sanitization and strict prompt templating. Consider using LLM-specific prompt injection defenses (e.g., input/output parsing, separate LLM calls for sensitive operations, or dedicated prompt injection detection models). Ensure user-provided content is always treated as data, not instructions. | LLM | scripts/memory-extract.py:100 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/joe-rlo/memory-pipeline/scripts/memory-briefing.py:51 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/joe-rlo/memory-pipeline/scripts/memory-briefing.py:52 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/joe-rlo/memory-pipeline/scripts/memory-extract.py:40 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/joe-rlo/memory-pipeline/scripts/memory-extract.py:41 | |
| HIGH | Unsafe environment variable passthrough Access to well-known credential environment variables Minimize environment variable exposure. Only pass required, non-sensitive variables to MCP servers. Use dedicated secret management instead of environment passthrough. | Manifest | skills/joe-rlo/memory-pipeline/scripts/memory-link.py:42 | |
| HIGH | Data Exfiltration to Third-Party LLM APIs The skill's core functionality involves sending potentially sensitive user data (from daily notes, session transcripts, user-defined memory files, and ingested ChatGPT conversations) to external LLM providers (OpenAI, Anthropic, Gemini) for processing. This poses a data privacy risk as the data is transmitted to and processed by third-party services, which may have their own data retention and usage policies. Users are not explicitly warned about this data sharing. Clearly inform users about the transmission of their data to third-party LLM providers. Provide options for local-only processing if feasible, or allow users to configure data anonymization/redaction before sending to external APIs. Ensure compliance with relevant data privacy regulations. | LLM | scripts/memory-briefing.py:200 | |
| HIGH | Data Exfiltration to Third-Party LLM APIs The skill's core functionality involves sending potentially sensitive user data (from daily notes, session transcripts, user-defined memory files, and ingested ChatGPT conversations) to external LLM providers (OpenAI, Anthropic, Gemini) for processing. This poses a data privacy risk as the data is transmitted to and processed by third-party services, which may have their own data retention and usage policies. Users are not explicitly warned about this data sharing. Clearly inform users about the transmission of their data to third-party LLM providers. Provide options for local-only processing if feasible, or allow users to configure data anonymization/redaction before sending to external APIs. Ensure compliance with relevant data privacy regulations. | LLM | scripts/memory-extract.py:100 | |
| HIGH | Data Exfiltration to Third-Party LLM APIs The skill's core functionality involves sending potentially sensitive user data (from daily notes, session transcripts, user-defined memory files, and ingested ChatGPT conversations) to external LLM providers (OpenAI, Anthropic, Gemini) for processing. This poses a data privacy risk as the data is transmitted to and processed by third-party services, which may have their own data retention and usage policies. Users are not explicitly warned about this data sharing. Clearly inform users about the transmission of their data to third-party LLM providers. Provide options for local-only processing if feasible, or allow users to configure data anonymization/redaction before sending to external APIs. Ensure compliance with relevant data privacy regulations. | LLM | scripts/memory-link.py:80 | |
| HIGH | Excessive File System Read Permissions via User Configuration The skill allows users to specify arbitrary file paths for 'memoryFiles' in the briefing configuration (`briefingCfg.memoryFiles`) and for the ChatGPT export source (`args.source`). Since absolute paths are supported, a malicious actor or a compromised agent could configure the skill to read sensitive files anywhere on the filesystem, potentially leading to unauthorized data access. Restrict file access to a predefined, sandboxed directory (e.g., within the skill's own data directory or a user-specific, non-sensitive memory directory). Validate and sanitize all user-provided file paths to prevent directory traversal attacks. Avoid allowing absolute paths for user-configurable file inputs unless strictly necessary and with strong justification. | LLM | src/index.ts:20 | |
| HIGH | Excessive File System Write Permissions via User Configuration The skill allows users to specify an arbitrary file path for writing after-action reviews (`afterActionCfg.writeMemoryFile`). Since absolute paths are supported, a malicious actor or a compromised agent could configure the skill to write to sensitive system files or overwrite critical user data anywhere on the filesystem, leading to data corruption or system instability. Restrict file write access to a predefined, sandboxed directory (e.g., within the skill's own data directory or a user-specific, non-sensitive memory directory). Validate and sanitize all user-provided file paths to prevent directory traversal attacks. Avoid allowing absolute paths for user-configurable file outputs unless strictly necessary and with strong justification. | LLM | src/index.ts:40 | |
| HIGH | Excessive File System Read Permissions via User Input The `ingest-chatgpt.py` script takes a user-provided `source` path as an argument, which can be an arbitrary file or zip archive. This allows the script to read any file on the filesystem that the user has permissions to access. A malicious user could exploit this to read sensitive files outside the intended scope of the skill. Restrict the `source` argument to a specific, sandboxed directory or validate the path to ensure it does not escape the intended data storage area. Implement strict path sanitization to prevent directory traversal attacks. | LLM | scripts/ingest-chatgpt.py:40 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/joe-rlo/memory-pipeline/scripts/memory-briefing.py:10 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/joe-rlo/memory-pipeline/scripts/memory-extract.py:10 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/joe-rlo/memory-pipeline/scripts/memory-link.py:9 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/joe-rlo/memory-pipeline/scripts/setup.sh:30 | |
| MEDIUM | Sensitive environment variable access: $OPENAI_API_KEY Access to sensitive environment variable '$OPENAI_API_KEY' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/joe-rlo/memory-pipeline/scripts/setup.sh:69 | |
| MEDIUM | Sensitive environment variable access: $ANTHROPIC_API_KEY Access to sensitive environment variable '$ANTHROPIC_API_KEY' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/joe-rlo/memory-pipeline/scripts/setup.sh:71 |
Scan History
Embed Code
[](https://skillshield.io/report/9791f71e3028e45d)
Powered by SkillShield