Trust Assessment
fal-ai received a trust score of 28/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 1 critical, 2 high, 2 medium, and 1 low severity. Key findings include arbitrary command execution, a dangerous `subprocess.run()` call, and a suspicious `requests` import.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/sxela/falai/scripts/fal_client.py:330 |
| HIGH | **Dangerous call: `subprocess.run()`.** Call to `subprocess.run()` detected in function `video_to_data_uri`; this can execute arbitrary code. Avoid dangerous functions such as `exec`, `eval`, and `os.system`; use safer alternatives. | Static | skills/sxela/falai/scripts/fal_client.py:330 |
| HIGH | **Arbitrary local file read and exfiltration.** The skill's `image_to_data_uri` and `video_to_data_uri` functions read arbitrary local files from a user-controlled path, base64-encode the contents, and include them in the `input_data` sent to the fal.ai API. This gives an attacker a direct path to instruct the agent (via prompt injection) to read sensitive files (e.g. `/etc/passwd`, `~/.ssh/id_rsa`, `~/.aws/credentials`) from the host and exfiltrate their contents to the fal.ai service. Enforce strict path validation so reads stay within an allowed, sandboxed directory, or require explicit user confirmation for files outside a designated skill-specific directory; consider warning the user about this capability. | LLM | scripts/fal_client.py:300 |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected. This module provides network access; verify the import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/sxela/falai/scripts/fal_client.py:21 |
| MEDIUM | **Potential command injection via ffprobe argument.** The `_get_video_metadata` function executes `ffprobe` via `subprocess.run`, passing a user-controlled `file_path` directly as an argument. While `subprocess.run` with an argument list is safer than `shell=True`, a maliciously crafted `file_path` could exploit weaknesses in `ffprobe`'s argument parsing or be interpreted as an option, potentially leading to arbitrary command execution or information disclosure. Validate or sanitize `file_path` before passing it to `ffprobe`, run `ffprobe` with minimal privileges, and consider a dedicated metadata-extraction library instead of direct subprocess calls. | LLM | scripts/fal_client.py:270 |
| LOW | **Access to API keys from multiple sources.** The `get_api_key` function retrieves the FAL API key from environment variables (`FAL_KEY`), a global OpenClaw configuration file (`~/.openclaw/openclaw.json`), and a skill-specific `TOOLS.md` file (`~/.openclaw/workspace/TOOLS.md`). This is the intended credential mechanism, but it gives the skill read access to sensitive locations; if the execution environment were compromised, these credentials could be exposed. This risk is inherent to skills that require API keys: sandbox and secure the agent's execution environment, store keys via a secrets-management system where possible, and grant skills only the minimum necessary permissions. | LLM | scripts/fal_client.py:100 |
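The arbitrary-file-read finding can be mitigated with the strict path validation the report recommends. A minimal sketch in Python, assuming a hypothetical `ALLOWED_ROOT` sandbox directory (the directory location and helper name are illustrative, not part of the skill's actual code):

```python
from pathlib import Path

# Hypothetical sandbox root; a real skill would choose its own location.
ALLOWED_ROOT = Path("~/.openclaw/workspace/media").expanduser().resolve()

def safe_resolve(user_path: str) -> Path:
    """Resolve a user-supplied path and reject anything outside ALLOWED_ROOT.

    Path.resolve() normalizes '..' segments and symlinks, so traversal
    tricks like 'media/../../.ssh/id_rsa' are caught by the containment check.
    """
    resolved = Path(user_path).expanduser().resolve()
    try:
        resolved.relative_to(ALLOWED_ROOT)  # raises ValueError if outside
    except ValueError:
        raise PermissionError(f"path outside allowed directory: {resolved}")
    return resolved
```

Functions like `image_to_data_uri` would call such a helper before opening the file, turning a prompt-injected read of `~/.ssh/id_rsa` into a hard error instead of an exfiltration.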
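For the ffprobe finding, the usual hardening steps are to keep list-form arguments (no `shell=True`), resolve the path to an absolute form so it can never be parsed as an `ffprobe` option, and bound execution with a timeout. A sketch under those assumptions, with the argument construction split out for testability (function names are illustrative, not the skill's):

```python
import subprocess
from pathlib import Path

def build_ffprobe_args(file_path: str) -> list[str]:
    """Build an ffprobe argv list with the path in a position-safe form.

    Resolving to an absolute path guarantees the final argument starts with
    '/' (or a drive letter), so a value like '-report' cannot be mistaken
    for an ffprobe option.
    """
    path = Path(file_path).expanduser().resolve()
    return [
        "ffprobe", "-v", "error",
        "-print_format", "json",
        "-show_format", "-show_streams",
        str(path),
    ]

def get_video_metadata(file_path: str) -> str:
    """Run ffprobe with list arguments and a timeout; returns raw JSON text."""
    result = subprocess.run(
        build_ffprobe_args(file_path),
        capture_output=True, text=True, check=True, timeout=30,
    )
    return result.stdout
```

Keeping the argv builder separate lets the position-safety property be unit-tested without `ffprobe` installed.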
[View full report](https://skillshield.io/report/84921739e22c9ae2)