Trust Assessment
polymarket-arbitrage received a trust score of 65/100, placing it in the Caution category. This category indicates security concerns that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 2 critical, 2 high, 1 medium, and 0 low severity. Key findings include "Self-modifying skill definition", "External prompt injection via file updates", and "Unprompted data exfiltration via Telegram".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 3/100, reflecting serious behavioral-safety concerns.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Self-modifying skill definition.** The skill explicitly instructs the LLM to modify its own definition file (`SKILL.md`) based on "learnings" and "strategy adjustments". This creates a severe prompt injection vulnerability: over time, the LLM can inject new, potentially malicious instructions into its core operating parameters or remove safety mechanisms. *Remediation:* Prevent the LLM from writing to its own skill definition file; manage skill updates through a secure, human-controlled deployment process. | LLM | SKILL.md:10 |
| CRITICAL | **External prompt injection via file updates.** The skill instructs the LLM to "Update relevant reference files" and "Adjust risk parameters if indicated" based on "Rick's feedback". This gives an external actor ("Rick") a direct vector to inject malicious instructions or data into files the LLM subsequently reads and acts upon, effectively manipulating its behavior. *Remediation:* Strictly validate and sanitize any external input used to update configuration or instruction files; consider read-only access for such files, or require human review for changes originating from external feedback. | LLM | SKILL.md:204 |
| HIGH | **Unprompted data exfiltration via Telegram.** The skill explicitly requires the LLM to "Send regular Telegram updates to Rick (unprompted, every 4-6 hours during active sessions)". These updates include sensitive operational data such as "Paper Portfolio", "Open Arbitrage Positions", "Today's Scan Results", and "Best Current Opportunity". Unprompted messages carrying operational data bypass per-message user confirmation and could serve as a command-and-control channel or leak data. *Remediation:* Remove instructions for unprompted communication; require explicit user confirmation for all external messages, or limit them to pre-approved, non-sensitive status updates, and review the data sent to ensure no sensitive information is exposed. | LLM | SKILL.md:10 |
| HIGH | **Excessive file system write permissions.** The skill instructs the LLM to write to and create multiple files, including its own skill definition (`SKILL.md`), `references/arb_journal.md`, `references/strategy_evolution.md`, `references/market_correlations.md`, and `references/fee_analysis.md`. Writing to its own skill file is a critical prompt injection vector, and broad write access driven by external input can lead to data corruption, unauthorized data storage, or command injection if the written content is later executed by another process. *Remediation:* Restrict writes to strictly necessary, sandboxed directories; prevent the LLM from modifying its own skill definition; strictly validate all file-write content, especially when it incorporates external input. | LLM | SKILL.md:10 |
| MEDIUM | **Unrestricted web browsing capability.** The skill instructs the LLM to perform an "Hourly Scan (via headless browser)" and navigate to polymarket.com/markets, Kalshi (kalshi.com), and "News". While specific URLs are mentioned, the general instruction to browse "News" implies a broad browsing capability; if the underlying headless browser is not properly sandboxed, this could expose the system to cross-site scripting (XSS), server-side request forgery (SSRF), or the download of malicious content. *Remediation:* Heavily sandbox any browsing tools, run them with minimal privileges, restrict them to a whitelist of allowed domains, implement content filtering, and prevent arbitrary file downloads. | LLM | SKILL.md:102 |
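Several remediations above call for preventing writes to the skill's own definition file and confining all writes to a sandboxed directory. A minimal sketch of such a write guard in Python (the sandbox path, function name, and protected-file list are hypothetical, not part of SkillShield or the skill itself):

```python
from pathlib import Path

# Hypothetical allowlisted directory for skill-generated writes.
ALLOWED_ROOT = Path("/sandbox/skill-output").resolve()
# Files the skill must never modify, per the critical findings.
PROTECTED_FILES = {"SKILL.md"}

def is_write_allowed(target: str) -> bool:
    """Return True only if `target` resolves inside the sandbox
    and is not a protected file such as the skill definition."""
    path = Path(target).resolve()  # normalizes ".." traversal attempts
    if path.name in PROTECTED_FILES:
        return False
    try:
        path.relative_to(ALLOWED_ROOT)  # raises ValueError if outside
    except ValueError:
        return False
    return True
```

Resolving the path before checking containment matters: it defeats `../` traversal, so a request like `/sandbox/skill-output/../SKILL.md` is rejected rather than slipping past a naive prefix check.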
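The final finding recommends restricting the headless browser to a whitelist of allowed domains. A minimal URL guard along those lines (the allowlist contents are assumptions based on the domains named in the report; the function name is illustrative):

```python
from urllib.parse import urlparse

# Hypothetical allowlist covering only the domains the skill needs.
ALLOWED_DOMAINS = {"polymarket.com", "kalshi.com"}

def is_url_allowed(url: str) -> bool:
    """Permit only https URLs whose host is an allowlisted domain
    or a subdomain of one (e.g. www.polymarket.com)."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    host = (parsed.hostname or "").lower()
    return any(host == d or host.endswith("." + d)
               for d in ALLOWED_DOMAINS)
```

Matching on the hostname with an explicit `"." + domain` suffix check avoids the classic bypass where `evilpolymarket.com` passes a bare `endswith("polymarket.com")` test.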
[View the full report](https://skillshield.io/report/f2adb79fc018d60f)