Trust Assessment
frontend-design received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 2 critical, 0 high, 0 medium, and 0 low severity. Key findings include "Prompt Injection: Instruction to ignore tool output" and "Arbitrary File Write via Path Traversal in --output-dir and --project-name".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection: Instruction to ignore tool output.** The skill's definition, which is treated as untrusted input, contains a direct instruction for the AI agent to 'ignore' specific outputs from an internal tool. This is a form of prompt injection: it attempts to manipulate the LLM's processing of information by overriding or disregarding tool-generated data, similar to 'ignore previous instructions'. *Remediation:* Review and remove instructions within the skill that explicitly tell the LLM to 'ignore' or 'override' information, especially from tool outputs. If filtering or modification of tool output is necessary, implement it programmatically within the tool itself or through a clearly defined, auditable post-processing step, rather than as a direct LLM instruction. | LLM | SKILL.md:30 |
| CRITICAL | **Arbitrary File Write via Path Traversal in `--output-dir` and `--project-name`.** The `scripts/search.py` tool allows users to specify an arbitrary `--output-dir` and `--project-name` when generating a design system with the `--persist` option. The `project_name` argument is used to construct a subdirectory path (e.g., `output_dir/project_slug/MASTER.md`). The sanitization for `project_slug` (replacing spaces with hyphens) is insufficient, as it does not prevent path traversal sequences like `../` or absolute paths. An attacker could supply `project_name` as `../../../../etc` and `output_dir` as `/var/www/html` to write files to arbitrary system locations (e.g., `/etc/MASTER.md`). This can lead to data exfiltration, denial of service, or remote code execution if malicious content is written to sensitive or executable paths. *Remediation:* Implement robust sanitization for both `--output-dir` and `--project-name` to prevent path traversal. Restrict `output_dir` to a safe, designated directory (e.g., a temporary directory or a user-specific sandbox), and sanitize `project_name` to remove or reject path separators (`/`, `\`) and traversal sequences (`..`). Consider using `os.path.abspath` and checking that the resolved path is within an allowed base directory. | LLM | scripts/search.py:60 |
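The remediation in the second finding can be sketched as follows. This is an illustrative hardening, not code from `scripts/search.py`: the helper name `safe_output_path` and the slug rules are assumptions, but the core idea (slugify the project name, resolve with `os.path.abspath`, then verify containment in the base directory) matches the recommended fix.

```python
import os
import re

def safe_output_path(output_dir: str, project_name: str,
                     filename: str = "MASTER.md") -> str:
    """Resolve output_dir/<slug>/filename, rejecting path traversal.

    Hypothetical hardened version of the path construction in
    scripts/search.py, which only replaces spaces with hyphens.
    """
    # Reject separators and traversal sequences outright.
    if ".." in project_name or "/" in project_name or "\\" in project_name:
        raise ValueError(f"unsafe project name: {project_name!r}")

    # Slugify: collapse anything outside [A-Za-z0-9_-] into a hyphen.
    slug = re.sub(r"[^A-Za-z0-9_-]+", "-", project_name).strip("-")
    if not slug:
        raise ValueError(f"empty project slug from: {project_name!r}")

    base = os.path.abspath(output_dir)
    target = os.path.abspath(os.path.join(base, slug, filename))

    # Containment check: the resolved target must stay under base.
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"path escapes output dir: {target}")
    return target
```

A malicious `project_name` such as `../../../../etc` is rejected before any filesystem access, while ordinary names like `My Project` resolve to `<output_dir>/My-Project/MASTER.md`.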
Embed Code
[View full report on SkillShield](https://skillshield.io/report/ddcde5c9310aae62)
Powered by SkillShield