Trust Assessment
canvas-design received a trust score of 78/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 1 medium, 0 low, and 1 informational. Key findings include Arbitrary Font Download Capability, Implied File System Directory Access, and Skill-driven Prompt Injection for Behavior Control.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Arbitrary Font Download Capability.** The skill instructs the LLM to 'Download and use whatever fonts are needed'. If the LLM has a tool that can download files from arbitrary URLs, this presents a significant supply chain risk: an attacker could craft a prompt that causes the LLM to fetch malicious fonts or other files from untrusted sources, potentially leading to malware execution, data exfiltration, or other system compromise if the downloaded content is processed or executed. No trusted sources or validation mechanisms are specified for these downloads. *Remediation:* restrict font downloads to a predefined list of trusted URLs or a secure, sandboxed font repository; validate downloaded content (e.g., checksums, virus scanning); do not allow arbitrary URL downloads. | Static | SKILL.md:106 |
| MEDIUM | **Implied File System Directory Access.** The instruction 'Search the `./canvas-fonts` directory' implies the LLM has a tool that can list or read files in a specified directory. The given path is relative and specific, but if the underlying tool permits path traversal (e.g., `../`, absolute paths) or arbitrary file reads, it could be exploited for data exfiltration: an attacker could manipulate the prompt to read sensitive files outside the intended directory. *Remediation:* strictly sandbox any file system access tools, enforce path validation to prevent traversal outside designated, safe directories, and limit the LLM's ability to output raw file content directly to the user. | Static | SKILL.md:104 |
| INFO | **Skill-driven Prompt Injection for Behavior Control.** The skill explicitly injects strong directives into the LLM's context, such as 'The user ALREADY said "It isn't perfect enough. It must be pristine, a masterpiece if craftsmanship, as if it were about to be displayed in a museum."' and repeated emphasis on 'meticulously crafted' and 'expert craftsmanship'. These instructions are intended to guide the LLM's creative output as the skill was designed, but they demonstrate the skill's ability to inject specific, forceful directives that could override or strongly influence the LLM's internal reasoning or safety mechanisms if the content were malicious; this is a common technique for controlling LLM behavior within a skill. *Remediation:* review all skill-provided instructions to ensure they do not inadvertently or maliciously override core LLM safety policies or user intent, and strictly sanitize and validate all skill-provided directives. | LLM | SKILL.md:118 |
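The HIGH finding's remediation (a trusted-source allowlist plus pinned checksums for downloads) can be sketched as follows. This is a minimal illustration, not SkillShield's or the skill's code; the host names, URL, and digest value are hypothetical placeholders.

```python
import hashlib
from urllib.parse import urlparse

# Hypothetical allowlist of hosts fonts may be fetched from.
TRUSTED_FONT_HOSTS = {"fonts.gstatic.com", "fonts.internal.example"}

# Hypothetical pinned SHA-256 digests, recorded at review time.
KNOWN_CHECKSUMS = {
    "https://fonts.gstatic.com/s/roboto/v30/Roboto-Regular.ttf":
        "0000000000000000000000000000000000000000000000000000000000000000",
}

def validate_font_request(url: str) -> None:
    """Reject a download request unless the host is trusted
    and a checksum was pinned for this exact URL."""
    host = urlparse(url).hostname
    if host not in TRUSTED_FONT_HOSTS:
        raise PermissionError(f"untrusted font host: {host}")
    if url not in KNOWN_CHECKSUMS:
        raise PermissionError(f"no pinned checksum for: {url}")

def verify_payload(url: str, payload: bytes) -> bool:
    """Compare the downloaded bytes against the pinned digest."""
    return hashlib.sha256(payload).hexdigest() == KNOWN_CHECKSUMS[url]
```

Pinning per-URL digests means a compromised (but allowlisted) host still cannot substitute a tampered font without the check failing.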
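The MEDIUM finding's remediation (path validation confined to the designated directory) can be sketched as below, assuming a Python file-access layer; `safe_resolve` is a hypothetical helper, not part of the skill.

```python
from pathlib import Path

def safe_resolve(requested: str, base_dir: str = "./canvas-fonts") -> Path:
    """Resolve a requested filename and reject anything outside base_dir."""
    base = Path(base_dir).resolve()
    candidate = (base / requested).resolve()
    # resolve() collapses ".." segments and absolute-path joins, so a
    # traversal attempt like "../../etc/passwd" lands outside base here.
    if not candidate.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"path escapes {base_dir}: {requested}")
    return candidate
```

Validating the *resolved* path, rather than string-matching the input, is what defeats both `../` sequences and absolute paths.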
Full report: [skillshield.io/report/7a45017f77fb7fff](https://skillshield.io/report/7a45017f77fb7fff)