Trust Assessment
social-memory received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include Sensitive environment variable access: $HOME, Potential Prompt Injection via Stored User Content.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential Prompt Injection via Stored User Content The 'social-memory' skill allows storing arbitrary, unsanitized user-provided text in the 'notes' field (via the 'add' command) and 'interaction notes' field (via the 'log' command). When this stored content is later retrieved and presented to the host LLM (e.g., via the 'get' command), a malicious user could embed instructions or manipulative text that could influence the LLM's subsequent behavior, leading to prompt injection. The `social.sh` script itself does not sanitize these inputs before storage or display. The skill is designed to store user-provided text. The primary remediation lies with the LLM's integration layer: 1. Implement robust output sanitization or escaping of user-generated content before feeding it back into the LLM's context. 2. Clearly separate user-generated content from system instructions in the LLM's prompt structure. 3. Consider filtering or flagging known prompt injection patterns in stored data. 4. Limit the LLM's capabilities when processing potentially untrusted user-generated content. | LLM | social.sh:36 | |
| HIGH | Potential Prompt Injection via Stored User Content The 'social-memory' skill allows storing arbitrary, unsanitized user-provided text in the 'notes' field (via the 'add' command) and 'interaction notes' field (via the 'log' command). When this stored content is later retrieved and presented to the host LLM (e.g., via the 'get' command), a malicious user could embed instructions or manipulative text that could influence the LLM's subsequent behavior, leading to prompt injection. The `social.sh` script itself does not sanitize these inputs before storage or display. The skill is designed to store user-provided text. The primary remediation lies with the LLM's integration layer: 1. Implement robust output sanitization or escaping of user-generated content before feeding it back into the LLM's context. 2. Clearly separate user-generated content from system instructions in the LLM's prompt structure. 3. Consider filtering or flagging known prompt injection patterns in stored data. 4. Limit the LLM's capabilities when processing potentially untrusted user-generated content. | LLM | social.sh:70 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/luluf0x/social-memory/social.sh:5 |
Scan History
Embed Code
[](https://skillshield.io/report/c6deaad8fad42920)
Powered by SkillShield