Trust Assessment
fal received a trust score of 66/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 2 high and 1 medium severity (no critical or low). Key findings: unsanitized user input in `curl` arguments leading to command injection; the skill can be coerced to upload arbitrary local files; and broad `Bash(curl *)` and `Read` permissions that contribute to exploitability.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unsanitized user input in curl arguments leads to command injection.** User-provided arguments (`$1`, `$2`, `$ARGUMENTS`) are directly interpolated into `curl` commands without proper sanitization or escaping. An attacker can inject shell metacharacters (e.g., `$(command)`, `` `command` ``) into these arguments, leading to arbitrary command execution on the host system. This vulnerability affects the `search`, `schema`, `run`, `status`, `result`, and `upload` commands, where user input forms part of the URL path, query parameters, or file paths. **Remediation:** All user-provided arguments (`$1`, `$2`, `$ARGUMENTS`) used in shell commands must be properly sanitized and escaped to prevent shell metacharacter injection. For URL parameters, apply URL encoding. For file paths, ensure they are validated as safe paths and then quoted (e.g., `file=@"$1"`). Consider a safer method for constructing shell commands, such as an array-based approach if available, or explicitly quoting variables: `curl ... "q=$1"`. | LLM | SKILL.md:47 |
| HIGH | **Skill can be coerced to upload arbitrary local files.** The `upload` command allows a user to specify a file path (`$1`) which is then uploaded to `fal.run/fal-ai/storage/upload`. Combined with the declared `Read` permission, this creates a data exfiltration risk: an attacker could instruct the skill to upload sensitive files from the host system (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, or any other file accessible via the `Read` permission) to an external service. **Remediation:** Restrict the `upload` command to files from a specific, sandboxed directory (e.g., `~/.fal/sessions/${CLAUDE_SESSION_ID}/`). Implement strict validation on the provided file path (`$1`) to prevent directory traversal and ensure it points only to allowed locations. Alternatively, remove the `upload` functionality if it is not strictly necessary, or require user confirmation before uploading files. | LLM | SKILL.md:125 |
| MEDIUM | **Broad Bash(curl *) and Read permissions contribute to exploitability.** The skill declares `Bash(curl *)` and `Read` permissions. While `curl` is necessary for the skill's functionality, the `*` wildcard allows `curl` to be executed with arbitrary parameters; combined with unsanitized user input, this directly enables the command injection vulnerability. The `Read` permission, combined with the `upload` functionality, enables exfiltration of arbitrary files. **Remediation:** Narrow `Bash(curl *)` to specific `curl` invocations or patterns if possible, or ensure all arguments passed to `curl` are strictly validated and sanitized. Re-evaluate the necessity of the broad `Read` permission; if only specific directories need to be read, restrict its scope to those directories. | LLM | SKILL.md |
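For the permission-narrowing remediation, one hedged sketch of a tighter manifest follows. The exact permission-pattern syntax depends on the host runtime; the patterns below reuse the report's own `Bash(...)`/`Read` notation and are illustrative, not verified against the skill's SKILL.md.

```yaml
---
# Instead of the broad grants flagged above:
#   allowed-tools: Bash(curl *), Read
# pin curl to the API host and scope reads to the session sandbox:
allowed-tools: Bash(curl -s https://fal.run/*), Read(~/.fal/sessions/**)
---
```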
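The command-injection remediation above can be sketched in bash. This is a minimal illustration, not the skill's actual code: `urlencode` is a hypothetical helper, and the endpoint in the usage comment is a placeholder.

```shell
#!/usr/bin/env bash
# Percent-encode a user-supplied value before it is interpolated into a curl
# URL, so shell metacharacters like $(...) arrive as inert %XX sequences.
urlencode() {
  local s="$1" out="" c i
  for (( i = 0; i < ${#s}; i++ )); do
    c="${s:i:1}"
    case "$c" in
      [a-zA-Z0-9._~-]) out+="$c" ;;                # RFC 3986 unreserved set
      *) printf -v c '%%%02X' "'$c"; out+="$c" ;;  # everything else: %XX
    esac
  done
  printf '%s' "$out"
}

# Usage sketch (placeholder endpoint): the encoded value is also double-quoted,
# so metacharacters in "$1" never reach the command line unescaped.
#   q=$(urlencode "$1")
#   curl -s "https://example.invalid/search?q=${q}"
```

Where available, `curl --data-urlencode 'q=...'` achieves the same encoding for POST bodies without a hand-rolled helper; the double-quoting of `"$1"` remains necessary either way.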
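The upload-sandboxing remediation can be sketched the same way. `ALLOWED_DIR`, `safe_upload_path`, and the final `curl` line are illustrative assumptions about how the skill might be restructured; `realpath -e` here is the GNU coreutils variant.

```shell
#!/usr/bin/env bash
# Confine uploads to a sandboxed directory (assumed layout, per the finding).
ALLOWED_DIR="${HOME}/.fal/sessions"

safe_upload_path() {
  local requested="$1" resolved
  # Resolve symlinks and ".." components before the prefix check, so a path
  # like "$ALLOWED_DIR/../../etc/passwd" cannot slip through.
  resolved=$(realpath -e -- "$requested" 2>/dev/null) || return 1
  case "$resolved" in
    "$ALLOWED_DIR"/*) printf '%s\n' "$resolved" ;;
    *) return 1 ;;
  esac
}

# Usage sketch:
#   path=$(safe_upload_path "$1") || { echo "refused: $1 is outside the sandbox" >&2; exit 1; }
#   curl -s -F "file=@${path}" https://fal.run/fal-ai/storage/upload
```

Resolving first and comparing second is the important ordering: a plain string prefix check on the raw argument is defeated by both `..` segments and symlinks.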
[View the full report](https://skillshield.io/report/6fc9c39e9d7848bd)
Powered by SkillShield