Trust Assessment
promitheus received a trust score of 70/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is an explicit prompt injection vector via the `promitheus_inject` tool.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Explicit prompt injection vector via `promitheus_inject` tool | LLM | SKILL.md:24 |

The `promitheus_inject` tool is explicitly described as writing `STATE.md` "for prompt injection", and `STATE.md` is then auto-injected into the LLM's context at session start. This design allows the agent (or an attacker who can control the agent's tool usage) to write arbitrary instructions or data into the LLM's prompt, leading to direct prompt injection. This could be used to manipulate the LLM's behavior, extract sensitive information, or bypass safety mechanisms.

Recommended remediation: remove or severely restrict the `promitheus_inject` tool. If the goal is to expose internal state, ensure the content written to `STATE.md` is strictly structured data (e.g., JSON or YAML) that is parsed and validated, rather than free-form text injected directly into the prompt. Implement strict sanitization and validation of any content written to `STATE.md` so it cannot be interpreted as instructions by the LLM, and consider a dedicated internal-state mechanism that does not rely on file-based prompt injection.
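The recommended remediation can be sketched as a strict parse-and-validate step between the state file and the prompt. This is a minimal illustration, not promitheus code: the function name, the allowed keys, and the length limit are all hypothetical, and it assumes the state file is repurposed to hold strict JSON rather than free-form text.

```python
import json

# Hypothetical allow-list schema: only these keys, only scalar values,
# bounded string lengths. Anything else is rejected before it can
# reach the LLM's context.
ALLOWED_KEYS = {"phase", "last_task", "pending_count"}
MAX_VALUE_LEN = 200

def load_validated_state(path="STATE.md"):
    """Parse the state file as strict JSON and enforce the schema,
    so free-form text can never be injected into the prompt as
    instructions. Raises ValueError on any violation."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)  # fails fast on non-JSON / free-form text
    if not isinstance(data, dict):
        raise ValueError("state must be a JSON object")
    state = {}
    for key, value in data.items():
        if key not in ALLOWED_KEYS:
            raise ValueError(f"unexpected key: {key!r}")
        if not isinstance(value, (str, int, float)):
            raise ValueError(f"non-scalar value for {key!r}")
        if isinstance(value, str) and len(value) > MAX_VALUE_LEN:
            raise ValueError(f"over-long value for {key!r}")
        state[key] = value
    return state
```

A session bootstrap would then render `state` into the prompt from its own fixed template (e.g., `f"phase={state['phase']}"`), so the file contents supply only data, never instructions.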
[View the full report](https://skillshield.io/report/51634eed0580cfba)