Trust Assessment
upload-to-catbox received a trust score of 65/100, placing it in the Caution category. The skill carries security risks that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, and 1 medium severity (0 low). Key findings: arbitrary local file exfiltration to a public service, command injection via an unsanitized file path in a curl command, and excessive file system read permissions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100 and is the main area of concern.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)

| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Arbitrary local file exfiltration to public service | LLM | SKILL.md:39 |
| HIGH | Command injection via unsanitized file path in curl command | LLM | SKILL.md:39 |
| MEDIUM | Excessive file system read permissions | LLM | SKILL.md:22 |

CRITICAL: Arbitrary local file exfiltration to public service (LLM layer, SKILL.md:39)

The skill is designed to upload local image files to `catbox.moe`, a public file hosting service, but allows any local file path to be specified (by the user or inferred by the LLM). If an attacker or a manipulated LLM provides the path of a sensitive file (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, configuration files), that file's contents are uploaded to `catbox.moe` and become publicly accessible, resulting in severe data exfiltration. The skill implements no validation or restriction on the file paths or types that can be uploaded.

Recommendation: Implement strict validation and sanitization of file paths provided by the user or inferred by the LLM. Restrict file access to a designated, isolated temporary upload directory, or allow only specific, pre-approved file types and locations. Ensure the agent's execution environment enforces file system access controls. Consider a private or authenticated storage service if sensitive data might be involved, or at least warn the user explicitly about public exposure.

HIGH: Command injection via unsanitized file path in curl command (LLM layer, SKILL.md:39)

The skill constructs a `curl` command by interpolating a file path that originates from user input or LLM inference. If the path is not sanitized or escaped before being placed in the shell command, an attacker can inject arbitrary shell commands: a path such as `/path/to/image.png; rm -rf /` would cause `rm -rf /` to execute on the host. The examples in the skill show direct interpolation of the path, indicating a likely vulnerability.

Recommendation: Rigorously sanitize and quote/escape any user-provided or LLM-inferred file path before using it in a shell command. Prefer a command execution API that passes arguments as a list and handles argument separation safely, or explicitly escape all special shell characters in the path.

MEDIUM: Excessive file system read permissions (LLM layer, SKILL.md:22)

The skill's design implies the ability to read arbitrary local files: it can be triggered by path patterns including absolute paths, user home directories (`/Users/`, `/home/`), and project directories (`.cursor/projects/`). This overly broad read access enlarges the attack surface for data exfiltration or other malicious activity if the skill is compromised or misused.

Recommendation: Restrict the skill's file system access to the minimum necessary, ideally confined to a sandboxed environment or a specific temporary upload directory. Use explicit allow-lists of permitted files or directories rather than broad pattern matching.
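A minimal sketch of the path validation recommended for the critical finding, in Python. The directory name, the allowed extensions, and the `validate_upload_path` helper are hypothetical illustrations, not part of the skill:

```python
from pathlib import Path

# Hypothetical policy: uploads may only come from one dedicated directory
# and must look like image files.
ALLOWED_UPLOAD_DIR = Path("/tmp/catbox-uploads")
ALLOWED_SUFFIXES = {".png", ".jpg", ".jpeg", ".gif", ".webp"}

def validate_upload_path(raw_path: str) -> Path:
    """Resolve the candidate path and reject anything outside the allowed
    directory or without an approved image extension."""
    path = Path(raw_path).expanduser().resolve()
    # resolve() follows symlinks, so a symlink escaping the directory is caught.
    # Path.is_relative_to requires Python 3.9+.
    if not path.is_relative_to(ALLOWED_UPLOAD_DIR.resolve()):
        raise ValueError(f"refusing to upload file outside {ALLOWED_UPLOAD_DIR}")
    if path.suffix.lower() not in ALLOWED_SUFFIXES:
        raise ValueError(f"refusing to upload non-image file: {path.name}")
    return path
```

A real deployment would combine a check like this with OS-level sandboxing, since validation inside the skill alone cannot constrain a compromised execution environment.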
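The command-injection finding can be addressed by invoking curl with an argument list instead of an interpolated shell string. A sketch, assuming catbox.moe's usual upload form fields (`reqtype=fileupload`, `fileToUpload`); the helper names are hypothetical:

```python
import subprocess

CATBOX_API = "https://catbox.moe/user/api.php"  # public upload endpoint

def build_upload_argv(path: str) -> list[str]:
    """Build the curl argument list. With an argv list (shell=False, the
    subprocess default), the path travels as a single argument and shell
    metacharacters in it are never interpreted."""
    return [
        "curl", "-sf",
        "-F", "reqtype=fileupload",
        "-F", f"fileToUpload=@{path}",
        CATBOX_API,
    ]

def upload(path: str) -> str:
    """Run the upload and return the URL curl prints on stdout."""
    result = subprocess.run(build_upload_argv(path),
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()
```

Note that this neutralizes shell injection but not file exfiltration: path validation as in the previous finding is still required before calling `upload`.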
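For the excessive-read-permissions finding, an explicit directory allow-list can replace the broad `/Users/`, `/home/`, and `.cursor/projects/` pattern matching. A hypothetical sketch (the directory list and `is_readable` helper are illustrative assumptions):

```python
from pathlib import Path

# Hypothetical allow-list of directories the skill may read from.
READ_ALLOW_LIST = [
    Path("/tmp/catbox-uploads"),
    Path("/srv/skill-uploads"),
]

def is_readable(candidate: str) -> bool:
    """Return True only if the resolved path sits under an allowed directory."""
    path = Path(candidate).expanduser().resolve()
    return any(path.is_relative_to(d.resolve()) for d in READ_ALLOW_LIST)
```

Deny-by-default with a short allow-list is easier to audit than trigger patterns that match most of the home directory.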
Full report: https://skillshield.io/report/b13007dee0d69b73