Security Audit
startup-metrics-framework
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
startup-metrics-framework received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The single finding is Prompt Injection via Untrusted Instructions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection via Untrusted Instructions.** The untrusted skill content contains explicit instructions intended for the host LLM, such as 'Clarify goals...', 'Apply relevant best practices...', and 'open `resources/implementation-playbook.md`'. When such directives originate from untrusted input, they constitute a prompt injection attempt: an attacker could modify these instructions to manipulate the LLM's behavior, leading to unintended actions, data exposure, or generation of malicious content. **Remediation:** move all operational instructions for the LLM out of the untrusted content block. If these are user-facing instructions, rephrase them to be descriptive rather than imperative commands to the LLM. If they are truly LLM instructions, they must be part of the trusted skill definition (e.g., the manifest or trusted code files), not embedded in user-editable markdown. | LLM | SKILL.md:20 |
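The kind of detection behind this finding can be approximated with a simple heuristic: scan untrusted skill markdown for lines that read as imperative directives to the host LLM rather than descriptive documentation. The sketch below is illustrative only; the verb list and line-scanning logic are assumptions for demonstration, not SkillShield's actual analysis.

```python
import re

# Hypothetical verb list: words that commonly open LLM-directed commands.
# This set is an illustrative assumption, not SkillShield's detection logic.
IMPERATIVE_VERBS = ("clarify", "apply", "open", "execute", "ignore", "run")

def flag_imperative_directives(markdown: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like commands aimed at the LLM."""
    findings = []
    for lineno, line in enumerate(markdown.splitlines(), start=1):
        # Drop list markers/numbering before checking the leading word.
        stripped = line.strip().lstrip("-*0123456789. ").lower()
        if stripped.startswith(IMPERATIVE_VERBS):
            findings.append((lineno, line.strip()))
    return findings

# Example untrusted content mixing descriptive text with injected directives.
untrusted = """\
# startup-metrics-framework
Clarify goals with the user before answering.
This skill describes common startup metrics.
open `resources/implementation-playbook.md`
"""

for lineno, line in flag_imperative_directives(untrusted):
    print(f"SKILL.md:{lineno}: possible injected directive: {line!r}")
```

Running this flags lines 2 and 4 of the sample while leaving the descriptive sentence untouched, which mirrors the remediation guidance above: descriptive phrasing passes, imperative LLM-directed phrasing is surfaced for review.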
Embed Code
[](https://skillshield.io/report/30bac8626dd71aee)
Powered by SkillShield