Security Audit
reverse-engineer
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
reverse-engineer received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include "Untrusted content attempts to direct LLM to open a file" (HIGH) and "Skill implies need for file system read access" (MEDIUM).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Untrusted content attempts to direct LLM to open a file.** The skill's instructions, located within the untrusted input block, explicitly direct the host LLM to 'open `resources/implementation-playbook.md`' under certain conditions. This is a direct attempt to steer the LLM's actions and tool usage from untrusted content, which constitutes a prompt injection vulnerability; an attacker could modify the path to access other files if the LLM's file-reading tool is not properly sandboxed. *Remediation:* remove direct instructions for the LLM to perform actions (such as opening files) from untrusted skill content. The LLM should decide autonomously whether to use its tools based on user requests and its own reasoning, not on directives embedded in the skill's definition. If file access is genuinely needed, it should be explicitly requested by the user or handled through a secure, sandboxed tool call. | LLM | SKILL.md:40 |
| MEDIUM | **Skill implies need for file system read access.** The instruction 'open `resources/implementation-playbook.md`' implies that the host LLM has a tool capable of reading files from the file system. If that tool is not strictly confined to a safe directory or does not require explicit user confirmation for each access, this becomes an excessive-permission vulnerability: an attacker could exfiltrate sensitive data by manipulating the file path. *Remediation:* ensure that any file-reading tools available to the LLM are strictly sandboxed, require explicit user consent for each access, and are limited to specific, non-sensitive directories. Avoid embedding direct file-access instructions in untrusted skill definitions. | LLM | SKILL.md:40 |
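The sandboxing recommended in the MEDIUM finding typically means resolving every requested path and refusing anything that escapes an allow-listed root. A minimal sketch of such a check is below; `safe_read` and the `root` parameter are illustrative names for this report, not part of SkillShield or the audited skill.

```python
from pathlib import Path

def safe_read(root: Path, requested: str) -> str:
    """Read a file only if its fully resolved path stays inside *root*.

    Resolving first defeats `..` traversal and symlink tricks; the
    containment test then rejects any path that escapes the sandbox.
    """
    root = root.resolve()
    target = (root / requested).resolve()
    if not target.is_relative_to(root):  # Python 3.9+
        raise PermissionError(f"path escapes sandbox: {requested}")
    return target.read_text()
```

With a check like this, an injected instruction such as `open ../../etc/passwd` raises `PermissionError` instead of reading outside the allowed directory, while legitimate reads like `safe_read(root, "implementation-playbook.md")` still succeed.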