Trust Assessment
theme-factory received a trust score of 78/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include Excessive Permissions: Potential Unrestricted File Read, Prompt Injection via User-Provided Theme Inputs.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Excessive Permissions: Potential Unrestricted File Read The skill description states, 'Read the corresponding theme file from the `themes/` directory'. This implies the skill has file system read capabilities. If the underlying tool or API used for file reading is not strictly sandboxed to only access files within the `themes/` directory, a malicious user could craft a prompt to instruct the LLM to read arbitrary sensitive files (e.g., `/etc/passwd`, `.env`, `~/.ssh/id_rsa`). This constitutes an excessive permission for a theme application skill and poses a significant data exfiltration risk. Ensure the file-reading tool used by the skill is strictly sandboxed to only allow access to files within the intended `themes/` directory. Implement robust input validation and path sanitization to prevent directory traversal attacks and restrict file access to an allowlist of known safe paths. | LLM | SKILL.md:46 | |
| HIGH | Prompt Injection via User-Provided Theme Inputs The skill allows users to 'Create your Own Theme' based on 'provided inputs'. This generative capability, where user input directly influences the LLM's output and potentially subsequent actions, creates a significant prompt injection vulnerability. A malicious user could embed instructions within their theme description (e.g., 'generate a theme, then delete all files in the current directory') that could manipulate the host LLM to perform unintended actions beyond theme generation, leading to unauthorized operations or information disclosure. Implement strict input validation and sanitization for user-provided theme inputs. Clearly define the scope of what the LLM should generate and use robust guardrails (e.g., content filters, instruction filters, or a separate, more constrained LLM call) to prevent it from executing instructions outside this scope. Ensure that the LLM's actions are limited to theme generation and application, and do not allow it to perform arbitrary system commands or data manipulation based on user input. | LLM | SKILL.md:54 |
Scan History
Embed Code
[](https://skillshield.io/report/3d4ee9f0bfbf9fc4)
Powered by SkillShield