Trust Assessment
web-design-guidelines received a trust score of 65/100, placing it in the Caution category. Users should review the security findings below before deploying this skill.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. The key findings are "Dynamic Instruction Fetching Leads to Prompt Injection", "Dynamic Instructions Enable Data Exfiltration and Credential Harvesting", and "Unpinned External Content Introduces Supply Chain Risk".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Dynamic Instruction Fetching Leads to Prompt Injection.** The skill fetches 'guidelines' from an external URL (`https://raw.githubusercontent.com/vercel-labs/web-interface-guidelines/main/command.md`) and explicitly states that 'The fetched content contains all the rules and output format instructions.' The content of this external file therefore directly dictates the LLM's behavior, the rules it applies, and how it formats its output. An attacker controlling the file could inject arbitrary instructions to manipulate the host LLM, bypass safety mechanisms, or perform unintended actions. Recommendation: do not fetch dynamic instructions from untrusted or unverified external sources. If external data is necessary, it should be strictly data (e.g., JSON, YAML) with a predefined schema, processed by logic hardcoded in the skill rather than dictated by the external content. Pin the content by hash or version if fetching is unavoidable. | LLM | SKILL.md:17 |
| HIGH | **Dynamic Instructions Enable Data Exfiltration and Credential Harvesting.** Building on the prompt injection vulnerability, the skill states it will 'Read the specified files' and 'Output findings using the format specified in the guidelines.' Because the fetched guidelines can contain arbitrary instructions, they could direct the LLM to read sensitive local files (e.g., /etc/passwd, .env files, user data, API keys, tokens) and include their contents in the 'findings' output, effectively exfiltrating data or credentials. Recommendation: prevent dynamic instructions from controlling file access or output formatting. If file reading is required, strictly define the allowed paths and content types in the skill's hardcoded logic, and never allow external content to dictate which files are read or how their contents are processed and output. | LLM | SKILL.md:26 |
| HIGH | **Unpinned External Content Introduces Supply Chain Risk.** The skill fetches its core 'guidelines' from a raw GitHub URL (`https://raw.githubusercontent.com/vercel-labs/web-interface-guidelines/main/command.md`) without version pinning or content-hash verification. The content at this URL can change at any time, potentially introducing malicious instructions or vulnerabilities without the skill developer's knowledge or review. The skill's behavior is therefore entirely dependent on an unverified, dynamic external resource. Recommendation: avoid fetching dynamic instructions from unpinned external sources. If external data is absolutely necessary, pin it to a specific version or commit hash and, ideally, mirror it or verify its integrity (e.g., via cryptographic hash) before use. | LLM | SKILL.md:17 |
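The "pin by hash" remediation in the first and third findings can be sketched as a small integrity gate: the skill records the SHA-256 of the reviewed content at development time, and at runtime refuses any fetched body that does not match. This is a minimal illustration, not SkillShield's or the skill's actual code; the function name and the idea of combining it with a commit-pinned raw URL (instead of the mutable `main` branch) are assumptions.

```python
import hashlib


def verify_pinned_content(body: bytes, expected_sha256: str) -> bytes:
    """Return body only if it matches the hash recorded when the
    content was reviewed; otherwise refuse to use it.

    In a real skill, `body` would come from a URL pinned to a specific
    commit SHA rather than a branch name, so the hash check and the
    URL pin reinforce each other.
    """
    digest = hashlib.sha256(body).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"integrity check failed: got sha256 {digest}")
    return body
```

Any change to the upstream file then fails loudly instead of silently rewriting the skill's instructions.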
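The second finding's recommendation, keeping file access under the skill's hardcoded logic, could look like the following sketch. The allowlist roots (`./src`, `./docs`) and the function name are hypothetical; the point is that external content can never extend the set of readable paths.

```python
from pathlib import Path

# Hypothetical hardcoded allowlist: only these directories may be read.
# Fetched guidelines have no way to add entries to this tuple.
ALLOWED_ROOTS = (Path("./src").resolve(), Path("./docs").resolve())


def is_allowed(path: str) -> bool:
    """True only if the fully resolved path sits under an allowed root,
    so `..` tricks and absolute paths like /etc/passwd are rejected."""
    resolved = Path(path).resolve()
    return any(
        resolved == root or root in resolved.parents
        for root in ALLOWED_ROOTS
    )
```

A request such as `./src/../../etc/passwd` resolves outside every allowed root and is rejected before any file is opened.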
Powered by SkillShield