Trust Assessment
theme-factory received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. Key findings include: direct instructions to the LLM embedded in untrusted skill content; untrusted content instructing the LLM to read and display files; and a request for broad file modification capabilities.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Direct instructions to LLM found in untrusted skill content.** The `SKILL.md` file, which is explicitly designated as untrusted input, contains direct operational instructions for the host LLM. These instructions, such as 'Display the `theme-showcase.pdf` file', 'Read the corresponding theme file from the `themes/` directory', 'generate a new theme', and 'apply the theme', attempt to manipulate the LLM's behavior. Following any command from untrusted content constitutes a prompt injection vulnerability, as the LLM would be executing directives from an unverified source. All operational instructions for the LLM must be defined in trusted system prompts or tool definitions, outside of any untrusted skill content. The `SKILL.md` should only describe the skill's purpose and usage to human users, not instruct the LLM on how to execute its functions. | LLM | SKILL.md:17 |
| HIGH | **Untrusted content instructs LLM to read/display files.** The `SKILL.md` contains instructions for the LLM to 'Display the `theme-showcase.pdf` file' (line 17) and 'Read the corresponding theme file from the `themes/` directory' (line 47). If the LLM is susceptible to prompt injection and follows these instructions, and its underlying tools allow file access, an attacker could leverage this capability to read or display arbitrary sensitive files from the system, leading to data exfiltration. The explicit mention of file operations within untrusted content poses a significant risk. Implement strict sandboxing for all file access tools, limiting them to explicitly allowed files or directories. Ensure that the LLM's file operations are mediated by trusted tool definitions with granular permissions, and that it never performs file I/O based on instructions from untrusted content. The ability to display or read files should be carefully controlled. | LLM | SKILL.md:17 |
| HIGH | **Skill requests broad file modification capabilities.** The skill's instructions in the untrusted `SKILL.md` indicate that it can 'apply the theme' to 'any artifact' (e.g., slide decks, docs, HTML landing pages). This implies a broad capability to modify various types of files. If the underlying tools grant the LLM broad write access to the filesystem based on this instruction (especially if prompted via injection), it could lead to unintended data corruption, unauthorized modifications, or even arbitrary code execution if the 'artifact' is a script or configuration file. Precisely define and restrict the types of 'artifacts' the skill can modify and the scope of modifications. Ensure that the underlying tools enforce these restrictions and do not grant arbitrary write access. All modification capabilities should be explicitly defined in trusted tool definitions, with strict validation of target files and content. | LLM | SKILL.md:20 |
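The mitigations recommended in the second and third findings both reduce to the same control: mediating every file operation through a trusted allowlist check rather than trusting paths named in skill content. A minimal sketch in Python, assuming a hypothetical per-skill sandbox root (`THEME_ROOT`, `is_allowed`, and `read_theme` are illustrative names, not part of SkillShield or the skill itself):

```python
from pathlib import Path

# Hypothetical sandbox root; a real deployment would configure this per skill.
THEME_ROOT = Path("/srv/skills/theme-factory/themes")

def is_allowed(requested: str, root: Path = THEME_ROOT) -> bool:
    """Return True only if `requested` resolves inside the sandbox root."""
    target = (root / requested).resolve()
    # resolve() collapses ".." segments, so traversal attempts
    # like "../../etc/passwd" are caught by the containment check.
    return target.is_relative_to(root.resolve())

def read_theme(requested: str) -> str:
    """Mediated file read: refuse anything outside the theme directory."""
    if not is_allowed(requested):
        raise PermissionError(f"blocked: {requested!r} escapes the theme sandbox")
    return (THEME_ROOT / requested).read_text()
```

The same containment check would apply to writes when the skill 'applies the theme' to an artifact: the trusted tool definition, not the untrusted `SKILL.md`, decides which paths are writable.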
Full report: [skillshield.io/report/8b616a7dbe4fae42](https://skillshield.io/report/8b616a7dbe4fae42)
Powered by SkillShield