Security Audit
snyk/agent-scan:tests/skills/canvas-design
github.com/snyk/agent-scanTrust Assessment
snyk/agent-scan:tests/skills/canvas-design received a trust score of 67/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 0 high, 1 medium, and 0 low severity. Key findings include Skill attempts to inject user input into LLM context, Skill instructs LLM to search local directory.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on March 1, 2026 (commit 30a672c5). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Skill attempts to inject user input into LLM context The skill explicitly instructs the LLM that 'The user ALREADY said' a specific phrase. This is a direct prompt injection attempt, as it tries to manipulate the LLM's understanding of the current conversation state by injecting a pre-defined user statement. This can override actual user intent and lead to unexpected or malicious behavior. Remove any instructions that attempt to simulate or inject user input into the LLM's context. The LLM should only process actual user input and not be told what the user 'already said'. | LLM | SKILL.md:121 | |
| MEDIUM | Skill instructs LLM to search local directory The skill contains an instruction for the LLM to 'Search the `./canvas-fonts` directory.' If the LLM has access to file system tools (e.g., `list_directory`, `read_file`), this instruction could lead to the LLM listing or reading files from the specified directory. While the intended directory might be benign, the underlying capability to 'search' directories could be exploited by a malicious user prompt to access sensitive files or directories outside the intended scope, leading to data exfiltration or information disclosure. Avoid instructing the LLM to directly 'search' or 'list' local directories. If font selection is required, provide a pre-defined list of available fonts or use a sandboxed mechanism that does not expose arbitrary file system access to the LLM. | LLM | SKILL.md:99 |
Scan History
Embed Code
[](https://skillshield.io/report/6176506ab71c482f)
Powered by SkillShield