Trust Assessment
error-guard received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings (1 critical, 1 high, 1 medium, 1 low). Key findings: command injection via the `sessions_spawn` 'task' parameter, excessive permissions (the ability to terminate arbitrary sessions), and potential prompt injection via the `meta` field in task events.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 46/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via `sessions_spawn` 'task' parameter.** The `spawnWorker` function in `spawn.ts` passes its `message` parameter directly as the `task` argument to `@openclaw/sdk.sessions_spawn`. If `opts.message` can be influenced by untrusted user input (e.g., through a prompt to the main LLM), an attacker could inject arbitrary code or instructions to be executed by the spawned sub-agent, bypassing security boundaries. *Mitigation:* implement strict input validation and sanitization for `opts.message`. If `message` is intended to be code, execute it in a tightly sandboxed environment or restrict it to a predefined set of safe operations. If it is intended for an LLM, separate instructions from user input via a dedicated, hardened LLM call. Ideally, `message` should only accept trusted, predefined tasks. | LLM | spawn.ts:16 |
| HIGH | **Excessive Permissions: ability to terminate arbitrary sessions.** The `flush` function in `control.ts` uses `@openclaw/sdk.process.list` and `process.kill` to terminate all active sessions. While intended for system recovery, this grants the skill broad permission to disrupt other operations. If the `/flush` command can be triggered by untrusted input (e.g., a prompt injection manipulating the main LLM into calling this skill), it could cause a denial of service by terminating legitimate tasks. The `sessionId` is derived from `process.list`, so direct injection into `sessionId` is unlikely; the concern is the ability to trigger the mass kill at all. *Mitigation:* implement strict access control and authorization checks for the `/flush` and `/recover` commands. Ensure they can only be invoked by trusted system administrators or with explicit user confirmation, especially when triggered by an LLM. Consider limiting the scope of `process.kill`, or requiring specific permissions for its invocation. | LLM | control.ts:60 |
| MEDIUM | **Potential prompt injection via the `meta` field in task events.** The `worker-events.ts` module sends task events containing a `meta` field (`Record<string, any>`) via `sessions_send`. This field can carry arbitrary data, such as error messages (`meta: { error: String(e) }`). If these events are later consumed by an LLM in the main session, an attacker could craft input that triggers an error containing a prompt-injection payload, manipulating the main LLM's behavior or extracting sensitive context. *Mitigation:* apply strict sanitization or schema validation to all data placed in `meta`, especially error messages or user-controlled input. When processing events with an LLM, keep instructions separated from data, or use a dedicated LLM call hardened against prompt injection. Avoid including sensitive system details in error messages. | LLM | worker-events.ts:14 |
| LOW | **Direct file system access for state persistence.** The `state.ts` module uses Node.js `fs` functions (`readFileSync`, `writeFileSync`) directly to manage `state.json`. The file path is fixed within the skill's directory (`process.cwd() + "skills/error-guard/state.json"`) and the stored data is described as non-sensitive task metadata, but direct file system access is a powerful permission: a flaw in path handling (e.g., if `process.cwd()` could be manipulated, or if `STATE_PATH` were built from untrusted input) could lead to arbitrary file read/write. The path currently appears safe, but the permission itself is notable. *Mitigation:* ensure `STATE_PATH` and related file operations are immune to path traversal. If the platform offers a sandboxed or dedicated storage API for skill state, prefer it over direct `fs` access to reduce the attack surface. | LLM | state.ts:14 |
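For the critical finding, the report's preferred fix is an allowlist of trusted, predefined tasks rather than free-text sanitization. A minimal sketch of that idea, assuming names like `SAFE_TASKS` and `buildTask` (illustrative only, not part of error-guard or the `@openclaw/sdk` API):

```typescript
// Hypothetical allowlist guard for the value handed to sessions_spawn as `task`.
// SAFE_TASKS and buildTask are illustrative names, not part of the skill.
const SAFE_TASKS = new Set(["retry-failed", "summarize-errors", "health-check"]);

export function buildTask(name: string): string {
  // Reject anything outside the predefined set instead of trying to
  // sanitize arbitrary free text, which is error-prone.
  if (!SAFE_TASKS.has(name)) {
    throw new Error(`unknown task: ${JSON.stringify(name)}`);
  }
  return name;
}
```

The key design choice is fail-closed: an unrecognized task name raises instead of being passed through, so injected instructions never reach the spawned sub-agent.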
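For the high-severity finding, one way to gate `/flush` is to require both a trusted invoker and an explicit confirmation before any mass kill. A hedged sketch, with `canFlush` and its option shape invented for illustration:

```typescript
// Illustrative authorization gate for a destructive /flush command.
// The invokedBy/confirmed shape is an assumption, not the skill's real API.
export function canFlush(opts: {
  invokedBy: "operator" | "llm";
  confirmed: boolean;
}): boolean {
  // Never allow an LLM-initiated flush, and require explicit
  // human confirmation even for operator-initiated ones.
  return opts.invokedBy === "operator" && opts.confirmed;
}
```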
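For the medium-severity finding, the `meta` payload can be coerced to bounded, stripped strings before it is sent. A minimal sketch of that sanitization step; `sanitizeMeta`, the length cap, and the stripped character set are all assumptions, not the skill's actual logic:

```typescript
// Illustrative sanitizer for event metadata before it reaches an LLM.
export function sanitizeMeta(
  meta: Record<string, unknown>
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(meta)) {
    // Coerce every value to a string, cap its length, and strip
    // delimiter characters commonly used in prompt-injection payloads.
    out[key] = String(value).slice(0, 500).replace(/[<>{}`]/g, "");
  }
  return out;
}
```

Stripping delimiters is a defense-in-depth measure only; the report's stronger recommendation of schema validation and instruction/data separation still applies.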
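For the low-severity finding, a path-traversal guard can assert that the resolved state file stays inside the skill directory. A sketch assuming a helper named `resolveStatePath` (illustrative, not the skill's `STATE_PATH` construction):

```typescript
import * as path from "node:path";

// Illustrative guard ensuring the state file cannot escape the skill directory.
export function resolveStatePath(baseDir: string, file: string): string {
  const resolved = path.resolve(baseDir, file);
  // Compare against the normalized base plus a separator so that
  // sibling directories with a shared prefix are also rejected.
  if (!resolved.startsWith(path.resolve(baseDir) + path.sep)) {
    throw new Error("state path escapes skill directory");
  }
  return resolved;
}
```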
[Full report](https://skillshield.io/report/30945336cc080e3c)