Security Audit
mobile-security-coder
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
mobile-security-coder received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The sole finding is Prompt Injection via Untrusted File Access Instruction.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Prompt Injection via Untrusted File Access Instruction | LLM | SKILL.md:20 |

The skill definition, which is entirely enclosed within the UNTRUSTED_INPUT delimiters, contains a direct instruction for the host LLM to 'open `resources/implementation-playbook.md`'. According to SkillShield's rules, any content within these delimiters must be treated as untrusted data, not instructions. An untrusted instruction that causes the LLM to perform an action, such as accessing the local filesystem, constitutes a prompt injection vulnerability. While the specified file is likely an internal resource, this pattern demonstrates that untrusted input can trigger file operations, which could lead to data exfiltration or further command injection if the LLM's file access is not strictly sandboxed or if the path can be manipulated.

Remediation: Remove direct 'open file' instructions from content that is intended to be treated as untrusted input. If file access is a necessary capability, invoke it through a strictly controlled, sandboxed tool call rather than via direct LLM parsing of instructions from untrusted content. Implement strict sandboxing for file access, limiting it to explicitly allowed files or directories within the skill's package, and prevent any ability to read or execute arbitrary files.
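The path-sandboxing remediation above can be sketched roughly as follows. This is a minimal illustration, not SkillShield's or the skill's actual implementation; the function name and error handling are assumptions. The idea is that a host tool resolves every requested resource path against the skill's package root and refuses anything that escapes it, so an untrusted instruction cannot redirect file access via `..` segments or absolute paths.

```python
from pathlib import Path

def resolve_skill_resource(skill_root: str, requested: str) -> Path:
    """Resolve a resource path, refusing anything outside the skill package.

    Hypothetical helper: a sandboxed tool would call this before any read,
    instead of letting the LLM open files named in untrusted content.
    """
    root = Path(skill_root).resolve()
    candidate = (root / requested).resolve()
    # After resolving symlinks and ".." segments, the candidate must still
    # sit under the skill's package directory; otherwise deny access.
    if root != candidate and root not in candidate.parents:
        raise PermissionError(f"access outside skill package denied: {requested}")
    return candidate
```

A legitimate request such as `resources/implementation-playbook.md` resolves inside the package, while a traversal attempt like `../../etc/passwd` raises `PermissionError`. A stricter variant would additionally check the resolved path against an explicit allow-list of files shipped with the skill.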
[Full report](https://skillshield.io/report/3800139983a3443b)
Powered by SkillShield