Trust Assessment
file-organizer received a trust score of 65/100, placing it in the Caution category. Users should review its security findings before deploying this skill.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. The key findings are "Potential Command Injection via Unsanitized User Input in Shell Commands" (critical) and "Broad Filesystem Access with Potential for Destructive Actions" (high).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via Unsanitized User Input in Shell Commands.** The skill's instructions explicitly guide the LLM to construct and execute shell commands using placeholders (e.g., `[target_directory]`, `[directory]`, `path/to/new/folders`, `old/path/file.pdf`, `new/path/file.pdf`). If these placeholders are populated directly from untrusted user input without proper sanitization or quoting, a malicious user could inject arbitrary commands. For example, providing `my_folder; rm -rf /` as a target directory could lead to critical data loss or system compromise. The skill documentation does not specify any sanitization steps for user-provided paths or filenames before their inclusion in shell commands. *Recommendation:* Instruct the LLM to rigorously sanitize all user-provided input (e.g., directory paths, filenames) before incorporating it into shell commands. The preferred method is a safe execution mechanism that avoids `shell=True` (e.g., `subprocess.run` with `shell=False`, passing arguments as a list). If `shell=True` is unavoidable, ensure all user-controlled variables are properly quoted and escaped (e.g., using `shlex.quote` in Python) to prevent command injection. | LLM | SKILL.md:64 |
| HIGH | **Broad Filesystem Access with Potential for Destructive Actions.** The skill is designed to perform extensive file operations, including listing, moving, renaming, and deleting files across various user directories (e.g., `~`, `Downloads`, `Documents`, `Projects`). While this broad access is inherent to the skill's purpose, it significantly amplifies the impact of any command injection vulnerability. The instructions mention "Always confirm before deleting anything" and "Log all moves for potential undo", which are good practices, but these safeguards can be bypassed if command injection occurs before the confirmation prompt or if the logging mechanism is compromised. The ability to run `rm -rf /` (as in the command injection example above) highlights the critical risk. *Recommendation:* Beyond preventing command injection, consider a "dry run" mode for all destructive operations (move, delete) in which the LLM first presents the exact commands it *would* execute for user review, even after initial confirmation. Additionally, explore sandboxing or limiting the execution environment's filesystem access to only the necessary directories if the underlying platform supports such controls. Ensure robust logging and audit trails for all file modifications. | LLM | SKILL.md:20 |
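The command-injection remediation can be sketched in Python. This is a minimal illustration, not code from the skill itself: the function names are hypothetical, and only the `subprocess`/`shlex` usage reflects the recommendation above.

```python
import shlex
import subprocess


def move_file_unsafe(src: str, dst: str) -> None:
    # VULNERABLE: user input is interpolated into a shell string.
    # src = "a.pdf; rm -rf /" would execute the injected command.
    subprocess.run(f"mv {src} {dst}", shell=True, check=True)


def move_file_safe(src: str, dst: str) -> None:
    # Preferred: no shell at all. Each argument is passed verbatim to mv,
    # so metacharacters like ';' or '|' are treated as literal filename text.
    # "--" stops option parsing, defusing filenames that start with "-".
    subprocess.run(["mv", "--", src, dst], check=True)


def build_shell_command(src: str, dst: str) -> str:
    # If a shell string is unavoidable, quote every user-controlled part.
    return f"mv -- {shlex.quote(src)} {shlex.quote(dst)}"
```

Note that `shlex.quote` only protects POSIX shells; on platforms using `cmd.exe`, the argument-list form is the only reliable option.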
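The "dry run" recommendation for destructive operations can likewise be sketched: accumulate planned operations, show the exact argument vectors for review, and execute only on explicit confirmation. The `FilePlan` class and its method names are illustrative assumptions, not part of the skill.

```python
import subprocess
from dataclasses import dataclass, field


@dataclass
class FilePlan:
    """Collects destructive file operations for user review before execution."""
    ops: list[tuple[str, ...]] = field(default_factory=list)

    def plan_move(self, src: str, dst: str) -> None:
        self.ops.append(("mv", "--", src, dst))

    def plan_delete(self, path: str) -> None:
        self.ops.append(("rm", "--", path))

    def preview(self) -> str:
        # Render the exact argument vectors that would run; nothing executes yet.
        return "\n".join(" ".join(op) for op in self.ops)

    def execute(self, confirmed: bool) -> int:
        # Run the planned operations only after explicit confirmation.
        if not confirmed:
            return 0
        for op in self.ops:
            subprocess.run(list(op), check=True)
        return len(self.ops)
```

Because `preview` shows argument vectors rather than a shell string, the review step doubles as an audit log entry and cannot itself be altered by metacharacters in filenames.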