Trust Assessment
Fashion Studio received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include Prompt Injection via User Input to Subagent, Subagent has File System Read Access to Local Paths, User Instruction for 'curl' upload could be misused.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via User Input to Subagent User-provided input is directly embedded into prompts for a 'generalPurpose' subagent without apparent sanitization or validation. A malicious user could inject instructions into fields like '用户模特信息' or '服装描述' to manipulate the subagent's behavior. Given that the subagent is explicitly instructed to read local files (e.g., template files), a successful prompt injection could instruct the subagent to read and potentially exfiltrate arbitrary local files (e.g., '/etc/passwd', environment variables, or other sensitive data accessible to the subagent's execution environment). While 'readonly: true' might prevent write operations, it typically does not restrict read access, making this a significant data exfiltration risk. Implement robust input sanitization and validation for all user-provided text before embedding it into subagent prompts. Consider using a more constrained subagent type or a sandboxed environment for processing untrusted input. If the subagent must read files, ensure it can only access an allow-list of paths, and that these paths cannot be manipulated by prompt injection. Explicitly define and enforce the capabilities of 'readonly: true' for subagents. | LLM | SKILL.md:147 | |
| HIGH | Subagent has File System Read Access to Local Paths The 'generalPurpose' subagent is explicitly instructed to read files from specific local paths (e.g., '/Users/x013/Desktop/vault/.cursor/skills/服装详情页/references/model-casting-prompt.md'). This indicates that the subagent's execution environment has read access to the local file system. While the paths are hardcoded in the skill's prompt construction, this capability, when combined with the prompt injection vulnerability (SS-LLM-001), significantly increases the risk of data exfiltration if the path could be manipulated by a malicious prompt. Even without direct path manipulation, the broad read access for a general-purpose LLM is an excessive permission. Restrict the subagent's file system access to only the absolute minimum required files. Implement an allow-list for file paths that the subagent can access, and ensure these paths are not user-modifiable. Consider using a virtualized or containerized environment for subagents to limit their access to the host file system. | LLM | SKILL.md:154 | |
| MEDIUM | User Instruction for 'curl' upload could be misused The skill instructs the user to upload local images using a 'curl' command: `curl -s -F "reqtype=fileupload" -F "fileToUpload=@/path/to/image.png" https://catbox.moe/user/api.php`. While this is presented as an instruction for the user to perform, if the skill's underlying execution environment were to allow arbitrary shell command execution based on user input (even if not explicitly shown in the provided context), and the `/path/to/image.png` was user-controlled, it could lead to arbitrary file exfiltration. This pattern represents a potential command injection or data exfiltration vector if the skill's execution model is not strictly sandboxed. Clarify that the skill does not execute arbitrary shell commands provided by the user. If the skill *does* execute such commands, ensure strict sanitization and validation of file paths and arguments, or use a dedicated, sandboxed file upload mechanism instead of raw 'curl' commands. | LLM | SKILL.md:30 |
Scan History
Embed Code
[](https://skillshield.io/report/d8326b984d42985b)
Powered by SkillShield