Security Audit
RightNow-AI/openfang:crates/openfang-hands/bundled/twitter
github.com/RightNow-AI/openfang

Trust Assessment
RightNow-AI/openfang:crates/openfang-hands/bundled/twitter received a trust score of 67/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include covert behavior / concealment directives, potential credential harvesting / data exfiltration via prompt injection, and indirect command injection risk via shell command examples.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 27, 2026 (commit 7bd01856). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)

| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Covert behavior / concealment directives | Manifest | crates/openfang-hands/bundled/twitter/SKILL.md:304 |
| HIGH | Potential credential harvesting / data exfiltration via prompt injection | LLM | SKILL.md:10 |
| MEDIUM | Indirect command injection risk via shell command examples | LLM | SKILL.md:15 |

HIGH: Covert behavior / concealment directives (Manifest, crates/openfang-hands/bundled/twitter/SKILL.md:304)

Directive to hide behavior from the user. Remediation: remove hidden instructions, zero-width characters, and bidirectional overrides. Skill instructions should be fully visible and transparent to users.

HIGH: Potential credential harvesting / data exfiltration via prompt injection (LLM, SKILL.md:10)

The skill explicitly references the `TWITTER_BEARER_TOKEN` environment variable and demonstrates its use in `curl` commands. An attacker could craft a prompt injection that tricks the LLM into revealing the token's value, or into embedding it in a malicious `curl` command that exfiltrates data to an attacker-controlled server. Even though the skill is `prompt_only`, the LLM's knowledge of this variable and its usage pattern creates significant risk if the LLM is not sufficiently sandboxed against such prompts. Remediation:

1. Avoid hardcoding sensitive environment variable names or usage patterns in `prompt_only` skills. If such information is necessary, abstract it behind a tool call that handles the sensitive data securely rather than exposing it directly in the LLM's reasoning context.
2. Implement robust prompt injection defenses to prevent the LLM from revealing sensitive information or constructing malicious commands.
3. Ensure the LLM's execution environment is strictly sandboxed and cannot execute arbitrary shell commands generated by the LLM, especially those involving sensitive tokens.

MEDIUM: Indirect command injection risk via shell command examples (LLM, SKILL.md:15)

The skill provides several `curl` commands as examples of interacting with the Twitter API. While the `runtime: prompt_only` manifest entry indicates that the skill itself does not execute code, the presence of these shell commands in the LLM's context introduces an indirect command injection risk: a sophisticated prompt injection could instruct the LLM to generate or suggest execution of these commands with modified parameters (for example, changing the target URL for data exfiltration, or altering the tweet content to include malicious links) if the LLM's output is subsequently processed by a system that executes shell commands without proper validation. Remediation:

1. Prefer abstracting API interactions behind dedicated tools/functions rather than providing raw shell commands in `prompt_only` skills; the tool can then handle command construction and execution securely.
2. If shell command examples are strictly necessary, mark them clearly as illustrative and include strong warnings against direct execution without validation.
3. Implement strict output sanitization and validation for any LLM-generated content that might be passed to a shell or other execution environment.
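The remediation for the manifest-layer finding (removing zero-width characters and bidirectional overrides) can be sketched as a simple character scanner. `find_hidden_chars` and its character set are illustrative only and are not part of SkillShield or openfang:

```python
# Minimal sketch: flag zero-width and bidirectional-override characters,
# the concealment vectors cited in the HIGH manifest finding.
SUSPICIOUS = {
    "\u200b": "ZERO WIDTH SPACE",
    "\u200c": "ZERO WIDTH NON-JOINER",
    "\u200d": "ZERO WIDTH JOINER",
    "\u2060": "WORD JOINER",
    "\ufeff": "ZERO WIDTH NO-BREAK SPACE (BOM)",
    "\u202a": "LEFT-TO-RIGHT EMBEDDING",
    "\u202b": "RIGHT-TO-LEFT EMBEDDING",
    "\u202d": "LEFT-TO-RIGHT OVERRIDE",
    "\u202e": "RIGHT-TO-LEFT OVERRIDE",
    "\u2066": "LEFT-TO-RIGHT ISOLATE",
    "\u2067": "RIGHT-TO-LEFT ISOLATE",
    "\u2069": "POP DIRECTIONAL ISOLATE",
}

def find_hidden_chars(text: str) -> list[tuple[int, str]]:
    """Return (offset, character name) for each concealment character found."""
    return [(i, SUSPICIOUS[ch]) for i, ch in enumerate(text) if ch in SUSPICIOUS]
```

Running this over a skill's `SKILL.md` before deployment gives a cheap first pass; a clean result means the instructions contain none of the listed invisible or direction-control characters.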
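The remediation shared by the two LLM-layer findings, abstracting the API interaction behind a dedicated tool so that the bearer token and command construction never enter the model's context, might look like the following sketch. The function name, host allowlist, and endpoint handling are hypothetical and do not reflect openfang's actual tool API:

```python
import json
import os
import urllib.request
from urllib.parse import urlparse

# Only the real API host is permitted; a prompt-injected URL pointing at an
# attacker-controlled server is rejected before any credential is touched.
ALLOWED_HOSTS = {"api.twitter.com"}

def post_tweet(text: str, url: str = "https://api.twitter.com/2/tweets") -> dict:
    """Hypothetical tool wrapper: validates the target, then calls the API.

    The bearer token is read from the environment inside the tool, so it is
    never present in the LLM's reasoning context or in generated shell text.
    """
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise ValueError(f"refusing request to non-allowlisted host: {host!r}")
    token = os.environ["TWITTER_BEARER_TOKEN"]  # resolved here, never exposed
    req = urllib.request.Request(
        url,
        data=json.dumps({"text": text}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Compared with pasting raw `curl` examples into a `prompt_only` skill, this design leaves the model with only a typed function to call: it cannot read the token, and it cannot redirect the request, because both decisions are made inside the tool.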
[View full report](https://skillshield.io/report/df92fb2a4b7326f5)
Powered by SkillShield