Trust Assessment
mind-blow received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include User input directly interpolated into LLM prompt, User-controlled target for Feishu message delivery.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | User input directly interpolated into LLM prompt The `intensity` and `topic` parameters, which are user-controlled inputs, are directly embedded into the prompt string sent to the Gemini LLM without any sanitization or escaping. This allows a malicious user to inject instructions into the prompt, potentially manipulating the LLM's behavior, extracting sensitive information, or generating unintended content. Implement robust input validation and sanitization for `intensity` and `topic`. For `intensity`, restrict to an enum of allowed values (`low`, `medium`, `high`, `max`). For `topic`, consider using a structured input for the LLM (e.g., a separate `topic` field in a JSON object if the API supports it) or carefully escaping/quoting the input within the prompt to prevent it from being interpreted as instructions. Alternatively, use a separate LLM call to validate or rephrase the topic before including it in the main prompt. | LLM | blow.js:24 | |
| MEDIUM | User-controlled target for Feishu message delivery The `target` option allows a user to specify an arbitrary Feishu `open_id` or `chat_id` for message delivery. While the skill's intended output is "mind-blowing insights," if combined with a successful prompt injection attack (SS-LLM-001), this vulnerability could be exploited to exfiltrate sensitive information generated by the LLM to an attacker-controlled Feishu account. Even without sensitive data, it allows redirection of the skill's output to an unintended recipient. Restrict the `target` parameter to a predefined list of allowed recipients or ensure that the target ID corresponds to the invoking user or a trusted group. If arbitrary targeting is required, implement strict authorization checks to ensure the invoking user has permission to send messages to the specified `target`. | LLM | blow.js:70 |
Scan History
Embed Code
[](https://skillshield.io/report/78d22c997906de01)
Powered by SkillShield