Trust Assessment
kalshi-trader received a trust score of 88/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. Key findings include Potential Prompt Injection via Untrusted API Output.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential Prompt Injection via Untrusted API Output The skill fetches market data from the Kalshi API and prints fields such as `m.title` directly to standard output without explicit sanitization. If the external Kalshi API were to return malicious strings (e.g., containing LLM instructions or markdown formatting that could be interpreted as instructions) within these fields, the host LLM, upon processing the skill's output, could be manipulated into executing unintended commands, revealing sensitive information, or altering its behavior. Implement robust output sanitization or escaping for all user-facing strings fetched from external APIs before printing them. For LLM contexts, this might involve wrapping output in specific delimiters (e.g., XML tags, JSON blocks) or using LLM-aware sanitization techniques to prevent interpretation as instructions by the host LLM. | LLM | scripts/kalshi.py:20 |
Scan History
Embed Code
[](https://skillshield.io/report/cbf7a25bb4956443)
Powered by SkillShield