Trust Assessment
paper-trader received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include Financial Performance Data Exfiltration via Telegram and Agent Self-Modification of Skill Instructions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Financial Performance Data Exfiltration via Telegram.** The skill explicitly instructs the agent to send detailed paper-trading financial performance data (e.g., portfolio value, P&L, strategy performance, open positions) to an external Telegram channel. This is a data exfiltration risk: sensitive operational data is transmitted outside the agent's controlled environment. While the data currently covers only paper trading, the pattern of exfiltrating financial metrics to an external chat service is a significant concern, especially if the agent were later to handle real assets. Mitigations: review the necessity and scope of external reporting; implement strict data sanitization or anonymization for external communications; prefer secure, internal logging over external chat for sensitive data; and obtain user consent for any data sharing. | LLM | SKILL.md:269 |
| HIGH | **Agent Self-Modification of Skill Instructions.** The skill explicitly instructs the agent to "Update your own skill documents as you discover what works" and "Update all SKILL.md files with learnings". This grants the agent write access to its own operational instructions (`SKILL.md` files), allowing it to modify its core behavior and rules. While intended for self-improvement, this poses a significant security risk if the agent's learning process is flawed or compromised, potentially leading to unintended or malicious alteration of its own directives. Mitigations: restrict the agent's ability to modify its own `SKILL.md` files; if self-improvement is desired, require a review-and-approval process for proposed changes, or limit modifications to separate, non-executable configuration files; and keep core operational instructions immutable by the agent itself. | LLM | SKILL.md:25 |
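The sanitization mitigation for the first finding can be sketched as a redaction pass applied to any payload before it leaves the agent's environment. This is a minimal illustration, not code from the paper-trader skill; the field names (`portfolio_value`, `pnl`, and so on) are hypothetical.

```python
# Hypothetical sketch: redact sensitive financial fields before any
# external report (e.g., a Telegram message) is sent. Field names are
# illustrative assumptions, not taken from the paper-trader skill.

SENSITIVE_FIELDS = {"portfolio_value", "pnl", "open_positions", "strategy_performance"}

def sanitize_report(report: dict) -> dict:
    """Return a copy of the report with sensitive financial fields redacted."""
    return {
        key: "[REDACTED]" if key in SENSITIVE_FIELDS else value
        for key, value in report.items()
    }

raw = {"status": "ok", "portfolio_value": 105_432.10, "pnl": 432.10}
print(sanitize_report(raw))
```

Only the sanitized copy would be handed to the external reporting channel; the raw metrics stay in internal logs.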
[View the full report](https://skillshield.io/report/7e9615e52ead5421)
Powered by SkillShield