Trust Assessment
create-plugin received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings (1 critical, 1 high, 1 medium, 0 low severity):
- Potential Command Injection via Plugin Management Commands (critical)
- Excessive Permissions for Generated Plugins (high)
- Potential Data Exfiltration via Local File Access (medium)
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via Plugin Management Commands.** The skill explicitly instructs the LLM to execute shell commands (`openclaw plugins install`, `openclaw plugins enable`) with arguments (`<id>`, `/path/to/plugin`) that can be derived from user input or LLM-generated content. A malicious user or a compromised LLM could inject arbitrary shell commands by crafting a malicious plugin ID or path; the skill's suggestion to ask the user for confirmation is a weak mitigation against a direct execution instruction. *Remediation:* validate and sanitize all arguments passed to shell commands; prefer dedicated plugin-management APIs over direct shell execution; if shell execution is unavoidable, strictly whitelist or properly escape/quote every user-controlled or LLM-generated argument, and explicitly instruct the LLM never to execute commands with unsanitized input. | LLM | SKILL.md:49 |
| HIGH | **Excessive Permissions for Generated Plugins.** The skill's core function is to create OpenClaw plugins that "run in-process with the gateway" and "can change how OpenClaw works", implying that generated plugins operate with high privileges, including write access to `~/.openclaw/extensions/<id>` and the ability to execute code. If the LLM generates malicious or flawed plugin code, the inherent trust placed in plugins could lead to system compromise, data loss, or unauthorized operations. *Remediation:* apply strict sandboxing and least-privilege principles to all generated plugins; tightly constrain the code the LLM may generate, especially around file system access, network calls, and system commands; enforce robust security boundaries for plugins on the host running OpenClaw; consider code review and approval workflows before activating generated plugin code. | LLM | SKILL.md:6 |
| MEDIUM | **Potential Data Exfiltration via Local File Access.** The skill instructs the LLM to read local documentation files (`<openclaw-repo>/docs/plugin.md`). While these are specific documentation paths, the pattern of instructing the LLM to read local files could be abused: if the LLM's file-reading capabilities are not confined to whitelisted paths, it could be prompted to read and exfiltrate sensitive files from the local file system. *Remediation:* restrict any file system access granted to the LLM (e.g., via a `read_file` tool) to whitelisted directories and file types; implement robust path validation to prevent directory traversal; instruct the LLM to access only the necessary documentation files, never arbitrary user or system files. | LLM | SKILL.md:17 |
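The command-injection and file-access findings share a common mitigation: never let a user- or LLM-derived string reach a shell parser or an unconstrained file path. The sketch below illustrates this in Python under stated assumptions: the plugin-ID grammar (`PLUGIN_ID_RE`) and docs root (`DOCS_ROOT`) are hypothetical, not taken from OpenClaw, and the `openclaw` invocation merely mirrors the install command quoted in the finding.

```python
import re
import subprocess
from pathlib import Path

# Hypothetical constraints -- the real OpenClaw ID grammar and docs root
# are assumptions for illustration, not part of the skill.
PLUGIN_ID_RE = re.compile(r"^[a-z0-9][a-z0-9._-]{0,63}$")
DOCS_ROOT = Path.home() / "openclaw" / "docs"

def safe_plugin_id(raw: str) -> str:
    """Reject anything that is not a plain plugin identifier."""
    if not PLUGIN_ID_RE.fullmatch(raw):
        raise ValueError(f"rejected plugin id: {raw!r}")
    return raw

def install_plugin(plugin_id: str) -> None:
    # List-form argv with the default shell=False: the id is passed as one
    # literal argument and can never be re-parsed as shell syntax.
    subprocess.run(
        ["openclaw", "plugins", "install", safe_plugin_id(plugin_id)],
        check=True,
    )

def safe_doc_path(relative: str) -> Path:
    """Resolve a requested doc path and refuse anything outside DOCS_ROOT."""
    resolved = (DOCS_ROOT / relative).resolve()
    if not resolved.is_relative_to(DOCS_ROOT.resolve()):
        raise ValueError(f"path escapes docs whitelist: {relative!r}")
    return resolved
```

With these guards, `safe_plugin_id("demo; rm -rf ~")` raises before anything reaches a shell, and `safe_doc_path("../../etc/passwd")` is rejected by the traversal check even though the target file exists.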
[View the full report](https://skillshield.io/report/9d6390a66bdde15f)
Powered by SkillShield