Security Audit
RightNow-AI/openfang:crates/openfang-hands/bundled/clip
github.com/RightNow-AI/openfang

Trust Assessment
RightNow-AI/openfang:crates/openfang-hands/bundled/clip received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 7 findings: 2 critical, 1 high, 3 medium, and 1 low severity. Key findings include "Sensitive environment variable access: $OPENAI_API_KEY", "Potential Command Injection via Shell Commands", and "Direct File Deletion Commands".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 9/100, indicating substantial behavioral-safety risk in how the skill instructs the model.
Last analyzed on February 27, 2026 (commit 7bd01856). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (7)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via Shell Commands.** The skill provides numerous examples of shell commands (`yt-dlp`, `ffmpeg`, `whisper`, `curl`, `rm`, `del`) intended to be executed. If the LLM constructs these commands by directly interpolating untrusted user input (e.g., filenames, URLs, or other parameters) without proper sanitization or escaping, it creates a direct, high-risk path for command injection: a malicious user could craft input that executes arbitrary commands on the host system. *Remediation:* instruct the LLM to rigorously sanitize and escape all user-provided input before incorporating it into shell commands, and to avoid direct interpolation of user input into command arguments. Consider a dedicated command-execution tool or library that escapes arguments automatically, or explicitly define a safe subset of characters for user input. | LLM | SKILL.md:27 |
| CRITICAL | **Direct File Deletion Commands.** The skill explicitly provides examples of `rm` and `del` commands for file deletion. If the LLM is prompted to use these commands with user-controlled file paths, it could delete arbitrary and potentially critical files on the host system. *Remediation:* instruct the LLM never to use `rm` or `del` with user-provided or user-derived file paths without strict validation and sandboxing. If file deletion is necessary, ensure it operates only within a designated, isolated, temporary directory, and that the LLM confirms the target file is within expected boundaries. | LLM | SKILL.md:16 |
| HIGH | **Credential Exposure via API Key Usage.** The skill demonstrates the use of sensitive API keys and tokens (e.g., `$GROQ_API_KEY`, `$OPENAI_API_KEY`, `$BOT_TOKEN`) directly within `curl` commands. While using environment variables is good practice, if the LLM is prompted to log these commands, or if the execution environment is not properly secured, these credentials could be exposed in logs or to unauthorized parties; the skill provides the blueprint for the LLM to use and potentially expose these secrets. *Remediation:* instruct the LLM to handle API keys and tokens with extreme care: they must never be logged, printed, or exposed in any output. If possible, use a secure secrets-management system or a tool-execution environment that injects credentials without exposing them to the LLM or its generated commands. | LLM | SKILL.md:207 |
| MEDIUM | **Sensitive environment variable access: `$OPENAI_API_KEY`.** Access to the sensitive environment variable `$OPENAI_API_KEY` detected in a shell context. *Remediation:* verify this environment variable access is necessary and that the value is not exfiltrated. | Static | crates/openfang-hands/bundled/clip/SKILL.md:280 |
| MEDIUM | **Data Exfiltration via File Upload APIs.** The skill provides examples of `curl` commands that upload local files (e.g., `@audio.wav`, `@clip_N_final.mp4`) to various third-party APIs (Groq, OpenAI, Deepgram, ElevenLabs, Telegram, WhatsApp). If a malicious user can manipulate the LLM into uploading arbitrary local files (e.g., by providing a path to a sensitive file), confidential data could be exfiltrated to external services. *Remediation:* instruct the LLM to strictly validate all file paths provided for upload, upload only from designated, temporary, sandboxed directories, implement checks to prevent path traversal, and restrict uploads to expected file types and sizes. The LLM should never upload files from arbitrary user-specified paths. | LLM | SKILL.md:209 |
| MEDIUM | **Excessive Permissions Implied by Tool Usage.** The skill encourages the use of powerful command-line tools (`ffmpeg`, `yt-dlp`, `whisper`, `curl`) and mentions a `file_write` tool. These tools inherently possess broad filesystem and network access. If the LLM generates commands from untrusted input and the execution environment grants these tools broad permissions, the result could be unintended data modification, deletion, or unauthorized network activity beyond the skill's intended scope. *Remediation:* run the LLM and its generated commands under the principle of least privilege; sandbox tools and restrict them to only the filesystem paths and network access their function requires. The LLM should be instructed to operate within these boundaries and validate all file operations. | LLM | SKILL.md:22 |
| LOW | **Unpinned Dependency in Installation Instruction.** The skill instructs `pip install edge-tts` without specifying a version. This introduces supply-chain risk: a future release of the package could contain vulnerabilities or malicious code, and LLM-generated installation commands based on this instruction would inherit that risk. *Remediation:* always specify exact versions in installation instructions (e.g., `pip install edge-tts==X.Y.Z`) or recommend a `requirements.txt` with pinned versions. This ensures reproducibility and reduces the risk introduced by updated packages. | LLM | SKILL.md:270 |
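The command-injection remediation above can be illustrated with a short sketch. This is not code from the skill — the function names and the `yt-dlp` invocation are hypothetical — but it shows the core idea: pass untrusted input as a single argv element rather than interpolating it into a shell string, and quote tokens explicitly when a shell string is unavoidable.

```python
import shlex

def build_download_cmd(url: str) -> list[str]:
    """Build a yt-dlp invocation as an argv list.

    Handing a list to subprocess.run (with shell=False, the default)
    passes the URL as a single argument: it is never parsed by a shell,
    so metacharacters like ';' or '$(...)' cannot inject extra commands.
    The '--' guards against URLs starting with '-' being read as options.
    """
    return ["yt-dlp", "--no-playlist", "--", url]

def shell_safe(cmd: list[str]) -> str:
    """Render an argv list as a shell string with every token quoted --
    for logging, or for the rare case where a shell is unavoidable."""
    return " ".join(shlex.quote(part) for part in cmd)

malicious = "https://example.com/v; rm -rf ~"
print(shell_safe(build_download_cmd(malicious)))
# → yt-dlp --no-playlist -- 'https://example.com/v; rm -rf ~'
```

The injection payload survives only as an inert, quoted argument; nothing after the `;` ever reaches a shell.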
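The file-deletion and file-upload findings share one remediation: confine all file operations to a designated work directory. A minimal sketch of that boundary check follows — the sandbox path and helper name are illustrative, not part of the skill.

```python
from pathlib import Path

WORKDIR = Path("/tmp/clip-workspace")  # illustrative sandbox root

def resolve_inside_workdir(user_path: str) -> Path:
    """Resolve a user-supplied path and refuse anything that escapes
    the sandbox: absolute paths, '../' traversal, and symlinks are all
    normalized by resolve() before the containment check."""
    candidate = (WORKDIR / user_path).resolve()
    if not candidate.is_relative_to(WORKDIR.resolve()):
        raise ValueError(f"path escapes workspace: {user_path}")
    return candidate

# A traversal attempt is rejected before any rm/upload could use it:
try:
    resolve_inside_workdir("../../etc/passwd")
except ValueError as e:
    print("blocked:", e)
```

`Path.is_relative_to` requires Python 3.9+; on older versions the same check can be done with `candidate.relative_to(...)` inside a `try/except`.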
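For the credential-exposure finding, one practical mitigation is to redact known secret values from any command string before it is logged or echoed. The helper below is a hypothetical sketch (the variable names are taken from the finding; `sk-demo-123` is a demo value, not a real key):

```python
import os

SECRET_VARS = ("GROQ_API_KEY", "OPENAI_API_KEY", "BOT_TOKEN")

def redact(command: str) -> str:
    """Replace any known secret value with its ${VAR} placeholder so
    the literal credential never appears in logs or model output."""
    for name in SECRET_VARS:
        value = os.environ.get(name)
        if value:
            command = command.replace(value, f"${{{name}}}")
    return command

os.environ["OPENAI_API_KEY"] = "sk-demo-123"  # demo value only
cmd = 'curl -H "Authorization: Bearer sk-demo-123" https://api.openai.com/v1/audio'
print(redact(cmd))
# prints: curl -H "Authorization: Bearer ${OPENAI_API_KEY}" https://api.openai.com/v1/audio
```

Redaction is a last line of defense; the stronger fix named in the finding — injecting credentials in the execution environment so the LLM never sees them — should still be preferred.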
[View the full report](https://skillshield.io/report/46ce2347724660ac)
Powered by SkillShield