Trust Assessment
context-anchor received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include Direct inclusion of untrusted file content into LLM prompt, Arbitrary filesystem read access via configurable WORKSPACE.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Direct inclusion of untrusted file content into LLM prompt The `scripts/anchor.sh` skill reads the content of various user-controlled files (e.g., `memory/current-task.md`, `context/active/*.md`, daily log files) and directly outputs them as part of a 'briefing' intended for the host LLM. If an attacker can write to these files, they can inject arbitrary text, including malicious instructions, into the LLM's context. This allows for prompt injection attacks, potentially leading to unauthorized actions or data exfiltration from the LLM itself. Implement robust sanitization or escaping of all user-controlled file content before it is presented to the LLM. This might involve using a templating engine that automatically escapes output, or explicitly filtering out known prompt injection keywords and patterns. Alternatively, consider summarizing file content rather than directly including it, or requiring explicit user confirmation for sensitive content. | LLM | scripts/anchor.sh:120 | |
| HIGH | Arbitrary filesystem read access via configurable WORKSPACE The `WORKSPACE` environment variable can be overridden by the user to point to any directory on the filesystem. The script then constructs `MEMORY_DIR` and `CONTEXT_DIR` relative to this `WORKSPACE`. This allows the skill to read files from arbitrary locations on the system (e.g., `/etc/passwd`, `/root/.ssh/id_rsa`, etc.) if the `WORKSPACE` is set to a parent directory or a symlink. Although the script only reads files and outputs their content to stdout, this constitutes a data exfiltration risk if the LLM is instructed to run the skill with a malicious `WORKSPACE` path and the output is captured. Restrict the `WORKSPACE` variable to a predefined, safe directory (e.g., within the skill's own directory or a dedicated sandbox). If arbitrary `WORKSPACE` paths are necessary, implement strict path validation to ensure it remains within an allowed scope and does not traverse outside designated safe areas (e.g., disallow `..`, absolute paths outside a root, or symlinks). Alternatively, run the script in a containerized environment with restricted filesystem access. | LLM | scripts/anchor.sh:10 |
Scan History
Embed Code
[](https://skillshield.io/report/2259f56d0d615d00)
Powered by SkillShield