Security Audit
codebase-cleanup-deps-audit
github.com/sickn33/antigravity-awesome-skillsTrust Assessment
codebase-cleanup-deps-audit received a trust score of 70/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. Key findings include Untrusted content attempts to define LLM persona and instructions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Untrusted content attempts to define LLM persona and instructions The `SKILL.md` file, which is explicitly marked as untrusted input, contains direct instructions to the host LLM. This includes defining its persona ('You are a dependency security expert...') and providing specific tasks and guidelines ('Inventory direct and transitive dependencies.', 'Do not publish sensitive vulnerability details...'). This constitutes a prompt injection attempt, as the untrusted content is trying to manipulate the LLM's behavior, role, and instructions, violating the principle that content within untrusted delimiters should be treated as data, not commands. Move all instructions, persona definitions, safety guidelines, and output format specifications out of the untrusted content delimiters and into the trusted skill definition or system prompt. The untrusted content should only contain data or user input, not instructions for the LLM. | LLM | SKILL.md:5 |
Scan History
Embed Code
[](https://skillshield.io/report/32a184b077d32f64)
Powered by SkillShield