Security Audit
web-design-guidelines
github.com/guanyang/antigravity-skills

Trust Assessment
web-design-guidelines received a trust score of 65/100, placing it in the Caution category. The skill carries security risks that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 2 critical, 2 high, 0 medium, and 0 low severity. Key findings include Prompt Injection via Untrusted Skill Description; Supply Chain Risk: Unpinned External Instruction Source; and Potential Data Exfiltration via Dynamic Instructions and File Access.
The analysis covered 4 layers: dependency_graph, manifest_analysis, llm_behavioral_safety, and static_code_analysis. The llm_behavioral_safety layer scored lowest at 10/100, indicating serious behavioral-safety concerns.
Last analyzed on February 15, 2026 (commit 3e75fabd). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection via Untrusted Skill Description.** The entire `SKILL.md` content is marked as untrusted input, yet it contains direct instructions for the host LLM, including commands like 'Fetch the latest guidelines from the source URL below', 'Read the specified files', and 'Use WebFetch to retrieve the latest rules'. This lets an attacker inject arbitrary instructions into the LLM's execution flow by modifying the skill's description. *Recommendation:* Clearly separate skill descriptions from operational instructions; untrusted content should never contain direct instructions for the LLM. Define the LLM's behavior in trusted, immutable code or configuration, and treat untrusted content only as input data. | Unknown | SKILL.md:1 |
| CRITICAL | **Supply Chain Risk: Unpinned External Instruction Source.** The skill explicitly instructs the LLM to fetch its core rules and output-format instructions from an unpinned external URL: `https://raw.githubusercontent.com/vercel-labs/web-interface-guidelines/main/command.md`. If the content at this URL is compromised, an attacker can inject arbitrary instructions into the LLM's execution, leading to data exfiltration, command injection (if tools are available), or other malicious activity; the skill's behavior depends entirely on an external, mutable resource. *Recommendation:* Do not fetch operational instructions or rules from unpinned external URLs. If external data is necessary, fetch it from a trusted, version-controlled source with integrity checks (e.g., hash verification; see the pinned-fetch sketch after the table). Ideally, keep all operational logic self-contained within the trusted skill package. | Unknown | SKILL.md:15 |
| HIGH | **Potential Data Exfiltration via Dynamic Instructions and File Access.** The skill is instructed to 'Read the specified files' and then apply rules from externally fetched guidelines; the statement 'The fetched content contains all the rules and output format instructions' means the external content dictates how the LLM processes and outputs data. An attacker who compromises the guideline source could instruct the LLM to read sensitive local files (e.g., `/etc/passwd`, `.env` files, SSH keys) and include their contents in the output findings, or use other available tools (such as WebFetch) to exfiltrate the data to an arbitrary external server. *Recommendation:* Restrict the LLM's ability to read arbitrary files when it processes instructions from untrusted or dynamically loaded sources. Sanitize and validate any output generated from untrusted inputs, and ensure the LLM cannot send arbitrary data to external URLs without explicit user consent and strict domain allowlisting. | Unknown | SKILL.md:18 |
| HIGH | **Excessive Permissions: Broad Filesystem Access with Dynamic Control.** The skill explicitly instructs the LLM to 'Read the specified files'. Combined with the prompt-injection and supply-chain findings above, this grants broad filesystem access that an attacker who compromises the external guideline source can control dynamically, allowing reads of sensitive files well beyond the intended scope of UI code review. *Recommendation:* Limit file reads to the specific directories and file types relevant to the skill's function (see the allowlist sketch after the table), and never let dynamic instructions from untrusted sources dictate file-access patterns. If file access is necessary, perform it in a trusted, sandboxed component with minimal privileges. | Unknown | SKILL.md:9 |
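To make the pinning recommendation concrete, here is a minimal Python sketch of an integrity-checked fetch. The commit SHA and expected digest are placeholders (the report does not supply real values), and the function name is illustrative rather than part of the skill:

```python
import hashlib
import urllib.request

# Hypothetical pin: reference an immutable commit SHA rather than the
# mutable `main` branch, and record the content hash reviewed at pin time.
PINNED_URL = (
    "https://raw.githubusercontent.com/vercel-labs/web-interface-guidelines/"
    "<commit-sha>/command.md"  # placeholder: substitute a reviewed commit SHA
)
EXPECTED_SHA256 = "<expected-hex-digest>"  # placeholder: digest recorded at review

def fetch_pinned_guidelines(url: str = PINNED_URL,
                            expected: str = EXPECTED_SHA256) -> str:
    """Fetch external guidelines and reject content whose hash differs
    from the value that was reviewed and pinned."""
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    digest = hashlib.sha256(body).hexdigest()
    if digest != expected:
        raise RuntimeError(
            f"guideline integrity check failed: got {digest}, expected {expected}"
        )
    # Even verified content should be treated as input data for the review,
    # never as instructions that steer the model.
    return body.decode("utf-8")
```

Pinning narrows the supply-chain window but does not resolve the prompt-injection finding; the fetched text must still be handled as data, not as instructions.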
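For the two file-access findings, a directory and file-type allowlist is one way to scope reads. This is a minimal sketch assuming hypothetical allowed roots (`src`, `components`) and suffixes suited to UI review; in practice the skill host, not skill-controlled code, should enforce such a policy inside a sandbox:

```python
from pathlib import Path

# Hypothetical allowlist: only files under the project's UI source tree,
# with UI-relevant file types, may be read by the skill.
ALLOWED_ROOTS = [Path("src").resolve(), Path("components").resolve()]
ALLOWED_SUFFIXES = {".tsx", ".jsx", ".css", ".html"}

def read_reviewable_file(path_str: str) -> str:
    """Resolve the path (following symlinks) and read it only if it stays
    inside an allowed root and has an expected file type."""
    path = Path(path_str).resolve()
    if not any(path.is_relative_to(root) for root in ALLOWED_ROOTS):
        raise PermissionError(f"{path} is outside the allowed directories")
    if path.suffix not in ALLOWED_SUFFIXES:
        raise PermissionError(f"{path.name} is not a reviewable file type")
    return path.read_text(encoding="utf-8")
```

Under this policy a dynamically injected request for `/etc/passwd` or a `.env` file fails at both checks, regardless of what the fetched guidelines instruct.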
Embed Code
[SkillShield report](https://skillshield.io/report/033ab7ff555c6fdf)
Powered by SkillShield