Trust Assessment
idea received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 6 findings: 1 critical, 2 high, 2 medium, and 1 low severity. Key findings include Sensitive environment variable access: $HOME, Prompt Injection via unsanitized user input, Excessive Permissions granted to Claude CLI.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 31/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings6
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via unsanitized user input The user-provided 'IDEA' is directly embedded into the prompt file ('PROMPT_FILE') without any sanitization or escaping. This allows an attacker to inject malicious instructions into the LLM's prompt, potentially overriding its original directives, extracting sensitive information, or manipulating its behavior. Sanitize or escape user input before embedding it into the LLM prompt. Consider using a templating system that strictly separates user-controlled data from system instructions. Implement input validation to restrict the content of 'IDEA'. | LLM | scripts/explore-idea.sh:31 | |
| HIGH | Excessive Permissions granted to Claude CLI The 'claude' CLI is executed with the '--dangerously-skip-permissions' flag. This explicitly bypasses security checks and grants the LLM broader access than it might otherwise have. In combination with prompt injection, this significantly increases the potential blast radius for data exfiltration or unauthorized actions. Remove the '--dangerously-skip-permissions' flag. Carefully evaluate and define the minimum necessary permissions for the Claude CLI to function, adhering to the principle of least privilege. | LLM | scripts/explore-idea.sh:60 | |
| HIGH | Hardcoded API Key/Token A 'HOOKS_TOKEN' is hardcoded directly within the 'notify-research-complete.sh' script. Hardcoding credentials makes them vulnerable to exposure if the script is accessed or committed to a public repository. This token could be used to authenticate with the Clawdbot gateway. Store the 'HOOKS_TOKEN' in a secure environment variable, a secrets manager, or a configuration file with restricted access. Load the token at runtime from the secure source instead of hardcoding it. | LLM | scripts/notify-research-complete.sh:18 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/andrewjiang/idea/scripts/explore-idea.sh:20 | |
| MEDIUM | Potential Command Injection via unsanitized argument to external command The user-controlled 'IDEA' is passed as the '$TITLE' argument to the 'telegram send-file' command. Although the argument is double-quoted, a sophisticated attacker could potentially craft the 'IDEA' to exploit vulnerabilities in the 'telegram' CLI's argument parsing or trigger unintended shell behavior, leading to command injection. The sanitization for JSON ('SAFE_TITLE') occurs after this call. Rigorously sanitize or escape all user-controlled arguments passed to external commands according to the specific parsing rules of the target command. Consider using a safer method for interacting with external tools or a library designed for secure command execution. | LLM | scripts/notify-research-complete.sh:23 | |
| LOW | Data Exfiltration of LLM-generated content The skill is designed to send the LLM-generated 'research.md' file to 'Telegram Saved Messages'. While this is intended functionality, if the LLM is compromised via prompt injection (as identified), it could be instructed to include sensitive information it has access to within 'research.md', leading to unintended data exfiltration to the user's Telegram account. Implement strict output filtering and content validation for LLM-generated content, especially when it is destined for external communication channels. Restrict the LLM's access to sensitive data to minimize the impact of a potential prompt injection. | LLM | scripts/notify-research-complete.sh:23 |
Scan History
Embed Code
[](https://skillshield.io/report/95703278c174622a)
Powered by SkillShield