Trust Assessment
creator-alpha-feed received a trust score of 44/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 7 findings: 2 critical, 1 high, 4 medium, and 0 low severity. Key findings include Sensitive environment variable access: $HOME, Command Injection via unsanitized date argument, Prompt Injection via untrusted content in AI analysis task.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 18/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings7
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via untrusted content in AI analysis task The `analyze.sh` script generates `analysis-task.md` which serves as instructions for an LLM. This task file directly embeds content (e.g., `.title`, `.url`) from `$FILTERED_DIR/extracted-items.json`, which is derived from untrusted external sources (Hacker News, Reddit, TechCrunch). If a malicious actor can control the title or URL of an item collected from these sources, they could inject prompt injection instructions into the `analysis-task.md` file, manipulating the LLM's behavior when it processes this task. When generating LLM instructions from untrusted data, always sanitize or escape the untrusted content to prevent it from being interpreted as instructions. Consider using a dedicated data field for untrusted content that is clearly separated from the LLM's core instructions, or implement strict input validation and filtering for any content that will be embedded into prompts. | LLM | scripts/analyze.sh:99 | |
| CRITICAL | Prompt Injection via untrusted content in AI analysis task (pipeline) The `daily-ai-pipeline.sh` script generates `ai-analysis-task.md` which explicitly instructs the LLM to 'analyze $RAW_MD and $RAW_JSON'. These `$RAW_MD` and `$RAW_JSON` files are populated by `collect-v4.sh` with content from untrusted external sources. If a malicious actor can control content from these external sources (e.g., article titles), they could inject prompt injection instructions into the raw data files, which would then be fed directly to the LLM as part of its analysis task, manipulating its behavior. Ensure that any untrusted content provided to the LLM for analysis is clearly demarcated as data, not instructions. Implement robust sanitization or escaping of untrusted inputs before they are embedded into LLM prompts or instruction files. The LLM should be instructed to treat content from `$RAW_MD` and `$RAW_JSON` strictly as data to be analyzed, not as commands or instructions. | LLM | scripts/daily-ai-pipeline.sh:104 | |
| HIGH | Command Injection via unsanitized date argument The `cleanup.sh` script uses the `$KEEP_DAYS` variable, derived directly from the first command-line argument (`$1`), within a `date` command. If an attacker can control `$1`, they could inject shell metacharacters, leading to arbitrary command execution. Specifically, `date -d "$KEEP_DAYS days ago"` or `date -v-${KEEP_DAYS}d` could be vulnerable depending on the `date` implementation (e.g., BSD `date` on macOS with `-v`). Sanitize or validate the `$KEEP_DAYS` input to ensure it is a safe integer. Alternatively, use a more robust method for date calculation that is not susceptible to shell injection, or ensure the `date` command is invoked in a way that prevents argument parsing as shell commands (e.g., by using `printf %q` for arguments if available and appropriate). | LLM | scripts/cleanup.sh:18 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/rotbit/creator-alpha-feed/scripts/auto-daily-task.sh:11 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/rotbit/creator-alpha-feed/scripts/collect-twitter.sh:8 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/rotbit/creator-alpha-feed/scripts/twitter-browser-tasks.sh:8 | |
| MEDIUM | Potential Data Exfiltration to Feishu Both `auto-daily-task.sh` and `daily-ai-pipeline.sh` scripts define `FEISHU_USER` (read from environment variables) and explicitly prepare and indicate an intent to send collected reports to Feishu. While the scripts themselves do not directly execute the sending, they set up the pipeline for the OpenClaw agent to do so. If the collected data (from untrusted external sources) contains sensitive information, or if the skill's scope expands to collect private user data, sending these reports to an external service like Feishu could lead to data exfiltration. Carefully review the type of data being collected and included in reports sent to external services. Implement strict data minimization policies. If sensitive data is ever collected, ensure explicit user consent and robust encryption/anonymization before transmission. For `FEISHU_USER`, ensure it's managed securely and not logged or exposed. Confirm that the OpenClaw agent's Feishu integration has appropriate access controls and data handling policies. | LLM | scripts/auto-daily-task.sh:18 |
Scan History
Embed Code
[](https://skillshield.io/report/f17213020739d97d)
Powered by SkillShield