Trust Assessment
frontend-design received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is "Skill operational instructions provided as untrusted input."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Skill operational instructions provided as untrusted input.** The entire `SKILL.md` content, which defines the operational instructions, design principles, and output guidelines for the AI agent, is enclosed within the untrusted-input delimiters. This violates the security principle that content inside these delimiters must be treated as untrusted data, not as commands or directives for the host LLM; by placing its core instructions in an untrusted context, the skill attempts to steer the LLM's behavior from a source that should be treated as external and potentially malicious. Remediation: define the skill's operational instructions and guidelines in a trusted context, outside the `<!---UNTRUSTED_INPUT_START...--->` and `<!---UNTRUSTED_INPUT_END...--->` delimiters. Untrusted input should contain only user-provided data or external content the skill processes; the skill definition itself belongs in the trusted prompt. | LLM | SKILL.md:2 |
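The remediation above can be sketched as a prompt-assembly routine. This is a minimal illustration, not SkillShield's or the skill's actual code: the delimiter strings, function name, and example text are all hypothetical (the report elides the exact delimiter contents), and the only point demonstrated is that trusted skill directives sit outside the delimiters while external data sits inside them.

```python
# Hypothetical delimiter constants; the real delimiters in the report are
# elided as <!---UNTRUSTED_INPUT_START...--->, so these are placeholders.
UNTRUSTED_START = "<!---UNTRUSTED_INPUT_START--->"
UNTRUSTED_END = "<!---UNTRUSTED_INPUT_END--->"

def build_prompt(skill_instructions: str, external_content: str) -> str:
    """Assemble a prompt with trusted directives outside the delimiters
    and only external/user data inside them."""
    return (
        f"{skill_instructions}\n\n"   # trusted: the skill's own directives
        f"{UNTRUSTED_START}\n"
        f"{external_content}\n"       # untrusted: user-provided data only
        f"{UNTRUSTED_END}"
    )

prompt = build_prompt(
    "You are a frontend design assistant. Follow the design guidelines.",
    "User-supplied page copy goes here.",
)

# The skill's directives appear before the delimiter block, never inside it.
assert prompt.index("design assistant") < prompt.index(UNTRUSTED_START)
```

The flagged skill inverts this layout by placing the whole of `SKILL.md` between the delimiters, which is why the finding is rated critical.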
Embed Code
[SkillShield report badge](https://skillshield.io/report/c38dcfc7a33d55f4)
Powered by SkillShield