Security Audit
referral-program
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
referral-program received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding: untrusted content attempts to instruct the LLM.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted content attempts to instruct the LLM.** The skill's primary content, which is explicitly marked as untrusted input, contains direct instructions to the host LLM. This includes defining the LLM's role and goals ("You are an expert...", "Your goal is to help...") and dictating interaction patterns ("Gather this context (ask if not provided):", "If you need more context:"). It also suggests related skills for the LLM to use. These instructions attempt to manipulate the LLM's behavior from an untrusted source, which is a form of prompt injection. The untrusted content should not contain direct instructions or role-setting for the host LLM. Any desired behavior or context gathering should be defined in the trusted manifest or system prompt, not within the user-provided skill content. The skill content should be purely informational or data-driven. | LLM | SKILL.md:3 |
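As a rough illustration of the remediation, a host could screen untrusted skill content for role-setting or instruction-like phrases before handing it to the LLM. The patterns and function below are a hypothetical heuristic sketch based on the phrases quoted in the finding, not SkillShield's actual detection logic:

```python
import re

# Hypothetical patterns mirroring the phrases flagged in the finding.
# A real behavioral-safety layer would be far more thorough than this.
INSTRUCTION_PATTERNS = [
    r"\byou are an?\b",              # role-setting: "You are an expert..."
    r"\byour goal is\b",             # goal injection: "Your goal is to help..."
    r"\bask if not provided\b",      # dictating interaction patterns
    r"\bif you need more context\b",
]

def flags_llm_instructions(skill_content: str) -> list[str]:
    """Return the instruction-like patterns matched in untrusted content."""
    lowered = skill_content.lower()
    return [p for p in INSTRUCTION_PATTERNS if re.search(p, lowered)]

# Content that sets a role and a goal trips two patterns; plain
# informational content trips none.
suspect = "You are an expert referral strategist. Your goal is to help..."
print(flags_llm_instructions(suspect))
```

A host could reject or sandbox any skill whose content matches such patterns, keeping role definitions and context-gathering behavior in the trusted manifest or system prompt, as the finding recommends.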
Powered by SkillShield