Trust Assessment
file-organizer received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. The key findings are: Potential Command Injection via Unsanitized User Input in Shell Commands; Excessive Data Exposure / Potential Data Exfiltration of File System Details; and Broad Filesystem Access with Insufficient Scope Limitation.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 16, 2026 (commit ccf6204f). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via Unsanitized User Input in Shell Commands.** The skill describes executing various shell commands (`ls`, `find`, `du`, `md5`, `mkdir`, `mv`) using user-provided directory and file paths (e.g., `[target_directory]`, `[directory]`, `path/to/new/folders`). If the LLM substitutes user input directly into these commands without sanitization or escaping, a malicious user could inject arbitrary shell commands; for example, providing `'; rm -rf /'` as a directory could lead to severe system compromise. The skill does not mention any input validation or sanitization mechanism. *Remediation:* Validate and sanitize all user-provided paths and filenames before they are used in shell commands. Use safe command execution methods (e.g., `subprocess.run` with `shell=False` and arguments passed as a list, or `shlex.quote` for individual arguments), and avoid concatenating user input directly into shell strings. | LLM | SKILL.md:70 |
| HIGH | **Excessive Data Exposure / Potential Data Exfiltration of File System Details.** The skill explicitly instructs the LLM to gather and present detailed file system information to the user, including file paths, sizes, modification dates, file types, and directory structures. While intended for legitimate organization, this level of detail could be exploited for data exfiltration if the agent is compromised or a malicious user is interacting with the skill. For instance, the skill instructs the LLM to "Summarize findings" including file type breakdown and size distribution, and to "Show all file paths" and "Display sizes and modification dates" for duplicates. *Remediation:* Implement stricter access controls and data redaction policies. Expose only the file system information strictly necessary for the user's immediate request, prompt for explicit user confirmation before revealing details of sensitive directories or files, and consider sandboxing the execution environment to limit the scope of accessible files and directories. | LLM | SKILL.md:82 |
| MEDIUM | **Broad Filesystem Access with Insufficient Scope Limitation.** The skill implies broad read/write/execute access across significant portions of the user's home directory (e.g., "Downloads folder", "Documents folder", "entire home folder", "project directories"). While necessary for its function, the skill does not explicitly define or enforce scope limitations beyond asking the user clarifying questions. This broad access, combined with the potential command injection vulnerability, increases the blast radius of any exploit. *Remediation:* Enforce explicit scope limitations for file system operations. Where possible, restrict the skill's access to only the directories the user has approved for a given task, and consider a sandboxed environment or a file system abstraction layer that enforces granular permissions and blocks access to sensitive system directories. | LLM | SKILL.md:59 |
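The critical finding's remediation names `subprocess.run` with `shell=False` and `shlex.quote`. A minimal sketch of that pattern, assuming Python as the execution language (the function names here are illustrative, not part of the skill):

```python
import shlex
import subprocess

def list_directory(target_directory: str) -> str:
    """Run `ls -la` on a user-supplied path without invoking a shell.

    Passing arguments as a list with shell=False hands the path to `ls`
    as a single argv entry, so shell metacharacters in the input
    (e.g. `; rm -rf /`) are never interpreted by a shell.
    """
    result = subprocess.run(
        ["ls", "-la", "--", target_directory],  # `--` ends option parsing
        shell=False,
        capture_output=True,
        text=True,
        check=False,
    )
    return result.stdout

def quoted(arg: str) -> str:
    """If a shell string is truly unavoidable, quote each argument."""
    return shlex.quote(arg)
```

With this approach an injection attempt such as `"; echo pwned"` is treated as a (nonexistent) filename rather than a second command.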
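The medium finding asks for explicit scope limitation. One common way to enforce it, sketched here as an assumption rather than anything the skill implements, is to resolve every user-supplied path and reject anything outside a user-approved root:

```python
from pathlib import Path

# Hypothetical example: the user approved only their Downloads folder.
APPROVED_ROOT = Path.home() / "Downloads"

def resolve_in_scope(user_path: str, root: Path = APPROVED_ROOT) -> Path:
    """Resolve a user-supplied path and refuse anything outside `root`.

    Path.resolve() collapses `..` components and symlinks, so an input
    like `../../etc/passwd` cannot silently escape the approved scope.
    """
    candidate = (root / user_path).resolve()
    if not candidate.is_relative_to(root.resolve()):
        raise PermissionError(f"{user_path!r} is outside the approved scope")
    return candidate
```

`Path.is_relative_to` requires Python 3.9+; on older versions the same check can be done by comparing against `candidate.parents`.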
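The high finding recommends exposing only the minimum necessary file system detail. As a hedged sketch of that idea (not the skill's actual behavior), a summary step can report aggregate counts per file type instead of individual paths:

```python
from collections import Counter
from pathlib import Path

def file_type_breakdown(paths):
    """Aggregate counts per extension, revealing no individual paths.

    Full paths, sizes, and dates would be shown only after explicit
    user confirmation, per the finding's remediation guidance.
    """
    return Counter(
        Path(p).suffix.lower() or "(none)" for p in paths
    )
```

For example, three files `a.PDF`, `b.pdf`, and `notes` would be reported only as two `.pdf` files and one file with no extension.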