Trust Assessment
symbolic-memory received a trust score of 32/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 2 critical, 0 high, 1 medium, and 1 low severity. Key findings include Network egress to untrusted endpoints, Suspicious import: requests, Prompt Injection via User Query.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 68/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Network egress to untrusted endpoints HTTP request to raw IP address Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/th3hypn0tist/symbolic-memory/symbolic_memory.py:5 | |
| CRITICAL | Prompt Injection via User Query The `args.query` parameter, which is user-controlled input, is directly interpolated into the prompt sent to the Ollama LLM without any sanitization or escaping. This allows an attacker to inject arbitrary instructions or data into the LLM's input, potentially overriding the skill's intended behavior (e.g., 'Use only the facts below. Do not invent new information.') or causing the LLM to generate harmful or unintended responses. Implement robust input sanitization for `args.query` before embedding it into the prompt. Consider using a structured prompt format (e.g., system/user roles) if supported by the LLM, or explicitly delimit user input within the prompt with instructions for the LLM to treat it as literal input, not instructions. | LLM | symbolic_memory.py:40 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/th3hypn0tist/symbolic-memory/symbolic_memory.py:2 | |
| LOW | Unpinned Python Dependencies The skill uses external Python libraries (`requests`, `psycopg2`) without specifying exact versions. This introduces a supply chain risk, as future updates to these libraries could introduce vulnerabilities, breaking changes, or even malicious code if a dependency is compromised. While these are common libraries, pinning versions is a best practice for security and stability. Pin exact versions for all external Python dependencies in a `requirements.txt` file or similar dependency management system. Regularly review and update these pinned versions to incorporate security patches. | LLM | symbolic_memory.py:2 |
Scan History
Embed Code
[](https://skillshield.io/report/8cd608817cf1d842)
Powered by SkillShield