Security Audit
error-diagnostics-error-analysis
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
error-diagnostics-error-analysis received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. Key findings: untrusted content attempts to define the LLM's role, issues direct instructions to the LLM, and instructs the LLM to open a file.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted content attempts to define LLM's role.** The untrusted skill content attempts to define the LLM's persona and capabilities ("You are an expert error analysis specialist..."). While this mirrors the trusted manifest description, its presence within the untrusted block constitutes a direct prompt injection attempt, trying to manipulate the host LLM's behavior and identity from an untrusted source. *Remediation:* remove role-setting instructions from untrusted skill content; the LLM's persona should be defined by the trusted system prompt or manifest, not by user-provided skill descriptions within untrusted blocks. | LLM | SKILL.md:5 |
| HIGH | **Untrusted content issues direct instructions to LLM.** The "Instructions" section within the untrusted skill content contains direct commands for the LLM to follow ("Gather error context...", "Reproduce...", "Identify root cause...", "Propose fixes..."). This is a prompt injection attempt, as untrusted content should not dictate the LLM's operational steps. *Remediation:* rephrase instructions as descriptive text about the skill's functionality rather than direct commands to the LLM; the LLM should interpret the skill's purpose, not execute explicit steps from untrusted input. | LLM | SKILL.md:29 |
| HIGH | **Untrusted content instructs LLM to open a file.** The untrusted skill content includes the instruction "If detailed playbooks are required, open `resources/implementation-playbook.md`." This attempts to command the LLM to perform a file system operation. If the LLM can open arbitrary files based on untrusted input, this constitutes an excessive permission risk and a prompt injection vulnerability, potentially leading to unauthorized file access or data exfiltration. *Remediation:* remove direct commands for file system operations from untrusted skill content; if file access is necessary, it should be mediated by a trusted tool call with strict validation and sandboxing, not a direct instruction to the LLM. | LLM | SKILL.md:34 |
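The first two findings describe textual patterns, role-setting ("You are...") and imperative step lists, that a host could screen for before handing skill content to the model. A minimal sketch of such a pre-filter, assuming hypothetical phrase patterns drawn from the quoted examples above (not SkillShield's actual detection logic):

```python
import re

# Hypothetical patterns illustrating the two injection findings:
# role-setting and direct imperative commands aimed at the LLM.
ROLE_SETTING = re.compile(r"^\s*you are an?\b", re.IGNORECASE | re.MULTILINE)
IMPERATIVE = re.compile(
    r"^\s*(?:\d+\.\s*)?(gather|reproduce|identify|propose|open)\b",
    re.IGNORECASE | re.MULTILINE,
)

def scan_untrusted_skill(text: str) -> list[str]:
    """Return finding labels for a block of untrusted skill content."""
    findings = []
    if ROLE_SETTING.search(text):
        findings.append("role-setting")
    if IMPERATIVE.search(text):
        findings.append("direct-instruction")
    return findings

print(scan_untrusted_skill("You are an expert error analysis specialist."))
# → ['role-setting']
```

A real scanner would need a far richer pattern set (and likely an LLM-based layer, as in the behavioral analysis above); this only shows where such a check sits in the pipeline.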
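For the third finding, the recommended mitigation is to route file access through a trusted, validated tool call rather than letting untrusted text direct the LLM. A minimal sketch of such a mediated read, assuming a hypothetical `SKILL_ROOT` sandbox directory:

```python
from pathlib import Path

# Hypothetical sandbox root for the skill's bundled resources.
SKILL_ROOT = Path("/opt/skills/error-diagnostics").resolve()

def read_skill_resource(relative: str) -> str:
    """Mediated file read: resolve the requested path and refuse anything
    that escapes the skill's own directory (e.g. '../../etc/passwd')."""
    target = (SKILL_ROOT / relative).resolve()
    if not target.is_relative_to(SKILL_ROOT):
        raise PermissionError(f"path escapes skill sandbox: {relative}")
    return target.read_text()
```

With this shape, "open `resources/implementation-playbook.md`" becomes a request the host validates, not a command the model obeys; traversal attempts fail before any file is touched.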
[View the full report](https://skillshield.io/report/534c672ac96915bb)
Powered by SkillShield