Security Audit
file-organizer
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
file-organizer received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. Key findings: Command Injection via unsanitized directory paths, Excessive Permissions and Broad Filesystem Access, and Data Exfiltration Risk via File Listing and Hashing.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via unsanitized directory paths.** The skill's instructions embed user-provided directory paths directly into shell commands (e.g., `ls -la [target_directory]`, `find [target_directory] ...`) without sanitization or robust shell quoting. A malicious user can inject arbitrary shell commands through a crafted `[target_directory]` input (e.g., `/tmp; rm -rf /`). While some `mv` commands are quoted in the examples, the `ls`, `find`, and `du` examples are not, creating a clear vulnerability. **Recommendation:** rigorously sanitize and quote all user-provided inputs used in shell commands (for directory paths, use `shlex.quote()` in Python or an equivalent quoting mechanism in other languages, and instruct the LLM to apply it), or use safer API calls that avoid direct shell execution for file operations. | LLM | SKILL.md:47 |
| HIGH | **Excessive Permissions and Broad Filesystem Access.** The skill operates on arbitrary user-specified directories, including sensitive locations such as the entire home folder, and instructs the LLM to use powerful shell commands (`ls`, `find`, `du`, `mkdir`, `mv`, and implicitly `rm` for duplicates). This grants broad read, write, and delete access across the filesystem, which is excessive for a file-organization task without strict scope limits or sandboxing; an attacker exploiting this could manipulate or delete critical system or user files. **Recommendation:** sandbox the skill's execution environment, limit the directories it can access to a predefined non-sensitive scope (e.g., a dedicated sandbox directory), validate user-provided paths against an allowlist or confirm they fall within the sandbox, and never operate on the entire home folder or root. | LLM | SKILL.md:37 |
| HIGH | **Data Exfiltration Risk via File Listing and Hashing.** The skill's analysis phase lists file paths, types, and sizes and computes MD5 hashes of files in user-specified directories. While intended for organization, this capability, especially when combined with the command-injection vulnerability, lets an attacker gather sensitive metadata (file existence, names, types, hashes) from arbitrary filesystem locations, which the LLM could then be prompted to exfiltrate. **Recommendation:** restrict the directories the skill can analyze, carefully filter the output of listing and hashing commands before presenting it to the user, avoid exposing it to attacker-controlled channels, and prefer sandboxed filesystem APIs over direct shell commands for analysis. | LLM | SKILL.md:47 |
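The quoting fix recommended for the critical finding can be sketched as follows. This is an illustrative example, not code from the skill itself; the function names are hypothetical. It shows both `shlex.quote()` for cases where a shell string is unavoidable, and the safer argument-list form that bypasses the shell entirely.

```python
import shlex
import subprocess

def list_directory(target_directory: str) -> str:
    """List directory contents with the user-supplied path safely quoted.

    shlex.quote() wraps the path so shell metacharacters (';', '&&',
    '$(...)') are treated literally, defeating inputs like '/tmp; rm -rf /'.
    """
    quoted = shlex.quote(target_directory)
    result = subprocess.run(
        f"ls -la {quoted}", shell=True, capture_output=True, text=True
    )
    return result.stdout

def list_directory_no_shell(target_directory: str) -> str:
    """Safer still: pass an argument list so no shell is ever invoked.

    The path is handed to ls as a single argv entry, so metacharacters
    in it are never interpreted.
    """
    result = subprocess.run(
        ["ls", "-la", target_directory], capture_output=True, text=True
    )
    return result.stdout
```

For example, `shlex.quote("/tmp; rm -rf /")` yields `'/tmp; rm -rf /'`, a single-quoted literal that `ls` receives as one harmless (if nonexistent) path instead of two commands.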
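The sandbox-scoping recommendation for the excessive-permissions finding could look like the sketch below. The sandbox root is an assumption for illustration; the skill defines no such boundary today.

```python
from pathlib import Path

# Hypothetical sandbox root; any real deployment would choose its own.
SANDBOX_ROOT = Path("/home/user/organize-me").resolve()

def validate_path(user_path: str) -> Path:
    """Resolve a user-supplied path and reject anything outside the sandbox.

    Path.resolve() collapses '..' components and follows symlinks, so
    traversal attempts like '<sandbox>/../../etc/passwd' are caught
    before any file operation runs.
    """
    candidate = Path(user_path).resolve()
    if not candidate.is_relative_to(SANDBOX_ROOT):
        raise PermissionError(f"{user_path!r} is outside the allowed sandbox")
    return candidate
```

Validating the resolved path, rather than the raw string, is the important design choice: prefix checks on the unresolved string are defeated by `..` segments and symlinks. (`Path.is_relative_to` requires Python 3.9+.)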
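Finally, the data-exfiltration finding recommends sandboxed filesystem APIs over shell commands for the analysis phase. A minimal sketch, assuming the same listing/size/MD5 metadata the skill collects, shows that none of it requires a shell at all:

```python
import hashlib
from pathlib import Path

def analyze_directory(root: str) -> list[dict]:
    """Collect per-file metadata (path, size, MD5) using Python APIs only.

    pathlib and hashlib keep the analysis phase entirely out of the shell,
    so no command string is ever built from untrusted input.
    """
    records = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file():
            continue
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        records.append({
            "path": str(path),
            "size": path.stat().st_size,
            "md5": digest,
        })
    return records
```

This replaces the `find`/`du`/hashing shell pipeline with library calls, and pairs naturally with a path check such as an allowlist applied to `root` before scanning.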
Full report: https://skillshield.io/report/322d38ead7aace5c