Security Audit
debugging-toolkit-smart-debug
github.com/sickn33/antigravity-awesome-skillsTrust Assessment
debugging-toolkit-smart-debug received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include Untrusted input directly embedded in LLM prompt.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Untrusted input directly embedded in LLM prompt The skill directly interpolates the `$ARGUMENTS` variable into the LLM's prompt at multiple points without any apparent sanitization or validation. This creates a classic prompt injection vulnerability. A malicious user could craft input in `$ARGUMENTS` to inject new instructions, override existing ones, or attempt to exfiltrate sensitive information from the LLM's context or connected tools (e.g., by manipulating queries to observability platforms like Sentry or DataDog). Implement robust input validation and sanitization for `$ARGUMENTS`. Consider using a structured input format (e.g., JSON) and explicitly defining how the LLM should interpret different parts of the input. If possible, pass `$ARGUMENTS` as a separate, structured parameter to the LLM rather than directly interpolating it into the main instruction prompt. Ensure that any tools called by the LLM also sanitize inputs derived from user-provided data. | LLM | SKILL.md:29 | |
| HIGH | Untrusted input directly embedded in LLM prompt The skill directly interpolates the `$ARGUMENTS` variable into the LLM's prompt at multiple points without any apparent sanitization or validation. This creates a classic prompt injection vulnerability. A malicious user could craft input in `$ARGUMENTS` to inject new instructions, override existing ones, or attempt to exfiltrate sensitive information from the LLM's context or connected tools (e.g., by manipulating queries to observability platforms like Sentry or DataDog). Implement robust input validation and sanitization for `$ARGUMENTS`. Consider using a structured input format (e.g., JSON) and explicitly defining how the LLM should interpret different parts of the input. If possible, pass `$ARGUMENTS` as a separate, structured parameter to the LLM rather than directly interpolating it into the main instruction prompt. Ensure that any tools called by the LLM also sanitize inputs derived from user-provided data. | LLM | SKILL.md:120 |
Scan History
Embed Code
[](https://skillshield.io/report/84e37ffc998bd228)
Powered by SkillShield