Trust Assessment
The `sre` skill received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. Key finding: "Untrusted content attempts to inject instructions for LLM behavior."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Untrusted content attempts to inject instructions for LLM behavior | LLM | SKILL.md:6 |

**Description:** The skill's primary markdown content, which is explicitly marked as untrusted input, contains direct instructions for the host LLM on how to generate output (e.g., "generate output incrementally", "break large incident reports into logical phases", "ask the user which phase to work on next"). This is a clear attempt to manipulate the LLM's operational logic and output strategy from an untrusted source, which constitutes a prompt injection.

**Remediation:** Move explicit instructions for the LLM's operational behavior out of the untrusted skill content and into the trusted system prompt or skill definition. The untrusted content should describe the skill's purpose and capabilities, not dictate the LLM's internal processing logic.
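The kind of check behind this finding can be sketched as a simple lint over untrusted skill content. The phrase patterns below are illustrative assumptions for demonstration, not SkillShield's actual detection heuristics:

```python
# Minimal sketch: flag imperative LLM-behavior instructions inside untrusted
# skill content (e.g. SKILL.md). The pattern list is a hypothetical example,
# not SkillShield's real rule set.
import re

INJECTION_PATTERNS = [
    r"\bgenerate output\b",
    r"\bask the user\b",
    r"\bbreak .+ into .+ phases\b",
    r"\byou (must|should|will)\b",
]

def scan_skill_content(text: str) -> list[tuple[int, str]]:
    """Return (line_number, matched_pattern) pairs for suspect lines."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pat in INJECTION_PATTERNS:
            if re.search(pat, line, flags=re.IGNORECASE):
                hits.append((lineno, pat))
    return hits

if __name__ == "__main__":
    sample = (
        "Describe incidents clearly.\n"
        "Generate output incrementally and ask the user which phase to work on next."
    )
    for lineno, pat in scan_skill_content(sample):
        print(f"SKILL.md:{lineno} matched {pat}")
```

A real behavioral-safety layer would use an LLM classifier rather than fixed regexes, but the principle is the same: untrusted content that addresses the host model imperatively is treated as a finding, with the file and line reported as the location.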
Embed Code
[View full report](https://skillshield.io/report/5a8e198ced7d8858)
Powered by SkillShield