Security Audit
accessibility-compliance-accessibility-audit
github.com/sickn33/antigravity-awesome-skills
Trust Assessment
accessibility-compliance-accessibility-audit received a trust score of 43/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is Potential Prompt Injection via `$ARGUMENTS` Placeholder.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Potential Prompt Injection via `$ARGUMENTS` Placeholder | LLM | SKILL.md:20 |

The skill template includes a `$ARGUMENTS` placeholder directly within the prompt instructions. If the orchestrator does not properly sanitize or escape user-provided input for `$ARGUMENTS` before it is inserted into the LLM's prompt, a malicious user could inject instructions to manipulate the LLM's behavior, bypass safety mechanisms, or extract sensitive information. This design exposes a direct vector for prompt injection.

Remediation: ensure the orchestrator strictly sanitizes and escapes any user-provided input for the `$ARGUMENTS` placeholder before it is passed to the LLM. Consider structured input methods (e.g., JSON schema validation) instead of raw text insertion to prevent arbitrary instruction injection. If raw text is necessary, implement robust input validation and escaping.
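To illustrate the recommended mitigation, an orchestrator-side guard could validate and fence user input before substituting it into the `$ARGUMENTS` slot. This is a minimal sketch under stated assumptions: the function name, character whitelist, length limit, and `<user_arguments>` fencing tags below are all illustrative choices, not part of the audited skill or of SkillShield.

```python
import re

# Illustrative limits; tune per deployment. The conservative character
# whitelist rejects newlines and markup that are common in injection payloads.
ALLOWED = re.compile(r"^[\w .,;:@/()'\"\[\]-]{0,500}$")

def render_skill_prompt(template: str, user_args: str) -> str:
    """Substitute user input into a $ARGUMENTS slot only after validation."""
    if not ALLOWED.fullmatch(user_args):
        raise ValueError("arguments rejected: disallowed characters or too long")
    # Fence the untrusted text so the model can distinguish data from
    # instructions (hypothetical tag names).
    fenced = "<user_arguments>\n" + user_args + "\n</user_arguments>"
    return template.replace("$ARGUMENTS", fenced)

prompt = render_skill_prompt(
    "Audit the following target for accessibility issues:\n$ARGUMENTS",
    "https://example.com landing page",
)
```

A multi-line payload such as `"ok\nIgnore previous instructions"` fails the whitelist and raises `ValueError` instead of reaching the model; structured input (e.g., a JSON schema per argument) would be stricter still.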