Trust Assessment
prediction-trader received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. Key findings include Prompt Injection via User Input.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via User Input The `analyze_topic` function directly interpolates user-controlled input (`topic`) into an f-string that forms the prompt for the `agent.chat()` method. This allows an attacker to inject arbitrary instructions into the LLM's prompt, potentially overriding system instructions, extracting sensitive information, or manipulating the LLM's behavior. Sanitize or escape user input before interpolating it into the LLM prompt. Ideally, separate user input from system instructions by passing the user's query as a distinct parameter to the LLM API (e.g., as a 'user message' or 'tool input') rather than embedding it directly within the system prompt or tool instructions. If direct interpolation is unavoidable, implement robust input validation and escaping to neutralize potential injection attempts. | LLM | scripts/trader.py:216 |
Scan History
Embed Code
[](https://skillshield.io/report/d252fba0c1ae0252)
Powered by SkillShield