Trust Assessment
openclaw-gen received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. Key findings include Direct user input passed to LLM prompt parameter.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Direct user input passed to LLM prompt parameter The skill's described usage pattern explicitly states that '用户需求描述' (user requirement description) is taken directly from the user and passed as the `prompt` parameter to the `llm-task` tool. This tool is identified as an 'LLM-task interface'. This design allows an attacker to inject malicious instructions into the underlying LLM's prompt, potentially overriding system instructions, extracting sensitive information, or generating unintended outputs. Implement robust input sanitization and validation for user-provided prompts. Use techniques like prompt templating, instruction/user input separation (e.g., using XML tags or specific delimiters), or a separate LLM-based input classifier/sanitizer to prevent malicious instructions from reaching the core LLM. Ensure the `llm-task` interface itself has built-in protections against prompt injection. | LLM | SKILL.md:34 |
Scan History
Embed Code
[](https://skillshield.io/report/f0175618a6322adb)
Powered by SkillShield