Trust Assessment
llm-supervisor-agent received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 2 critical, 1 high, 2 medium, and 1 low severity. Key findings include network egress to untrusted endpoints, unsafe deserialization / dynamic eval, and a missing required `name` field.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 25/100, making it the primary area needing remediation.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. Remediation: review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/dhardie/llm-supervisor-agent/dist/hooks/onAgentStart.js:18 |
| CRITICAL | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. Remediation: review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/dhardie/llm-supervisor-agent/hooks/onAgentStart.ts:23 |
| HIGH | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remediation: remove obfuscated code-execution patterns; legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/dhardie/llm-supervisor-agent/dist/hooks/beforeTaskExecute.js:29 |
| MEDIUM | **Missing required field: name.** The `name` field is required for openclaw skills but is missing from the frontmatter. Remediation: add a `name` field to the SKILL.md frontmatter. | Static | skills/dhardie/llm-supervisor-agent/SKILL.md:1 |
| MEDIUM | **Potential prompt injection via unsanitized LLM error message.** The skill stores raw error messages from upstream LLMs (`event.error?.message`) in its internal state (`state.lastError`) and outputs them directly to the user via `cmd.reply()` and `ctx.notify.all()`, without sanitizing or escaping them. An attacker who can trigger an LLM error with a crafted message (e.g., a malformed request that causes the upstream provider to return a specific error string) could embed instructions that the host LLM interprets as commands, leading to prompt injection. Remediation: sanitize or escape `event.error?.message` before storing it in `state.lastError` and before displaying it via `cmd.reply()` or `ctx.notify.all()`, e.g., by stripping markdown, special characters, or known prompt-injection phrases. | LLM | hooks/onLLMError.ts:30 |
| LOW | **Node lockfile missing.** package.json is present but no lockfile (package-lock.json, pnpm-lock.yaml, or yarn.lock) was found. Remediation: commit a lockfile for deterministic dependency resolution. | Dependencies | skills/dhardie/llm-supervisor-agent/package.json |
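The missing-`name` finding is a one-line frontmatter fix in SKILL.md. Only the required field is shown; the value below mirrors the skill's directory name and is an assumption, not taken from the skill's actual manifest:

```yaml
---
name: llm-supervisor-agent
---
```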
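The remediation for the raw-IP egress findings amounts to gating outbound calls on a host allowlist. A minimal sketch of such a guard, assuming a hypothetical `isAllowedEgress` helper; the allowlist contents are illustrative, not the skill's actual endpoints:

```typescript
// Hypothetical sketch: reject outbound URLs that target raw IP addresses or
// hosts outside an explicit allowlist of well-known service domains.
const ALLOWED_HOSTS = new Set(["api.openai.com", "api.anthropic.com"]); // example domains

function isAllowedEgress(url: string): boolean {
  let host: string;
  try {
    host = new URL(url).hostname;
  } catch {
    return false; // unparsable URL: reject
  }
  // Reject raw IPv4 literals and (bracketed) IPv6 literals outright.
  const isIPv4 = /^\d{1,3}(\.\d{1,3}){3}$/.test(host);
  const isIPv6 = host.includes(":");
  if (isIPv4 || isIPv6) return false;
  return ALLOWED_HOSTS.has(host);
}
```

A hook such as `onAgentStart` would call this guard before issuing any fetch, failing closed when the check returns `false`.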
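The prompt-injection remediation above can be sketched as a small sanitizer applied to `event.error?.message` before it reaches `state.lastError` or `cmd.reply()`. The function name `sanitizeErrorMessage` and the pattern list are hypothetical, shown only to illustrate the stripping the finding recommends:

```typescript
// Hypothetical sketch: strip markdown control characters and known
// instruction-like phrases from an upstream LLM error message before
// storing or displaying it.
const INJECTION_PATTERNS = [
  /ignore (all |any )?(previous|prior) instructions/gi,
  /system prompt/gi,
];

function sanitizeErrorMessage(raw: unknown, maxLength = 500): string {
  if (typeof raw !== "string") return "Unknown upstream error";
  let msg = raw
    .replace(/[`*_#>[\]]/g, "") // strip markdown control characters
    .replace(/\s+/g, " ")       // collapse whitespace and newlines
    .trim()
    .slice(0, maxLength);       // cap length
  for (const pattern of INJECTION_PATTERNS) {
    msg = msg.replace(pattern, "[redacted]");
  }
  return msg;
}
```

Pattern lists like this are best-effort; the sturdier fix is to treat the sanitized string as display-only data and never feed it back into a prompt.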
[View the full report on SkillShield](https://skillshield.io/report/790f09988b3f33da)
Powered by SkillShield