Trust Assessment
The avatar skill received a trust score of 51/100, placing it in the Caution category. Users should review its security considerations before deployment.
SkillShield's automated analysis identified 7 findings: 1 critical, 1 high, 4 medium, and 1 low severity. Key findings include arbitrary code execution via LLM output (`eval`), access to a sensitive environment variable (`$HOME`), and an unpinned npm dependency version.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 39/100, making it the primary area for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (7)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary code execution via LLM output (eval)** The server-side code in `src/server.ts` directly uses `eval(p.code)` to execute JavaScript code received from the OpenClaw gateway. The `p.code` payload originates from the AI agent's response, which is untrusted content. An attacker capable of manipulating the LLM's output could inject and execute arbitrary JavaScript code on the server, leading to full system compromise. Remove the use of `eval()`. Instead of executing arbitrary code, define a strict set of allowed functions or a sandboxed environment for agent-provided logic. If dynamic code execution is absolutely necessary, implement a secure sandboxing mechanism (e.g., a dedicated worker process with strict permissions, or a WebAssembly sandbox) and rigorous input validation. | LLM | src/server.ts:174 |
| HIGH | **Simli API Key exposed client-side** The `simliApiKey` is included in the `ClientConfig` object, which is sent to the browser via the `/api/client-config` endpoint. This exposes the API key to the client side, making it vulnerable to theft by an attacker who can inspect network traffic, use browser developer tools, or exploit client-side vulnerabilities like Cross-Site Scripting (XSS). Compromise of this key could lead to unauthorized use of the Simli service. Keep the Simli API key server-side. Instead of sending the raw key, implement a server-side proxy that handles requests to the Simli API and authenticates with the key; the client then makes requests to this proxy. | LLM | src/config/index.ts:109 |
| MEDIUM | **Sensitive environment variable access: $HOME** Access to sensitive environment variable `$HOME` detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/johannes-berggren/avatar/start-kiosk.sh:67 |
| MEDIUM | **Unpinned npm dependency version** Dependency `dotenv` is not pinned to an exact version (`^16.4.0`). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/johannes-berggren/avatar/package.json |
| MEDIUM | **Unpinned dependency `simli-client`** The `package.json` file specifies `simli-client: "latest"`. While `package-lock.json` currently pins a specific version, relying on `"latest"` in `package.json` is a supply chain risk. If the `latest` tag for `simli-client` on npm is updated to a malicious version in the future, new installations (especially without a `package-lock.json`, or if the lock file is ignored) could pull in compromised code. Pin the `simli-client` dependency to a specific version (e.g., `"simli-client": "1.0.1"`) in `package.json` to ensure deterministic and secure dependency resolution. Regularly review and update dependencies. | LLM | package.json:32 |
| MEDIUM | **Potential XSS in markdown rendering** The `detail` content from the AI agent's response is rendered directly into the DOM using `window.marked.parse(chatResponse.detail)` and then assigned to `innerHTML`. `marked` does not sanitize its HTML output by default, and LLM outputs are inherently untrusted: a malicious LLM response containing HTML/JavaScript could lead to Cross-Site Scripting (XSS), allowing an attacker to execute arbitrary client-side code and potentially exfiltrate data (e.g., stealing the `simliApiKey`, which is already client-side). Sanitize the output of `marked.parse` with a robust library such as `DOMPurify` before assigning it to `innerHTML`, or render markdown in a sandboxed iframe, or use a templating engine that automatically escapes HTML. | LLM | src/client/app.ts:202 |
| LOW | **`osascript` command injection pattern** The `start-kiosk.sh` script constructs and executes AppleScript commands using `osascript -e "..."`. Variables like `${WIN_X}` and `${WIN_Y}` are interpolated directly into the command string. While in this specific case these variables are derived from system screen properties and are unlikely to contain malicious characters, this pattern is a general command injection vulnerability if the interpolated variables were sourced from untrusted input. When constructing shell commands with interpolated variables, always validate and sanitize input to prevent injection. For numerical values, ensure they are strictly numbers. For string values, use proper escaping mechanisms or pass them as arguments to the script rather than interpolating directly into the command string. | LLM | start-kiosk.sh:60 |
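The critical `eval` finding recommends replacing raw code execution with a strict set of allowed functions. A minimal sketch of that allowlist pattern, assuming the agent payload can be reshaped to name an action rather than ship JavaScript (the names `runAction` and `actions` are illustrative, not from the avatar codebase):

```typescript
// Hypothetical sketch: instead of eval(p.code), dispatch to an allowlist of
// named actions. Each allowed operation is an ordinary function that
// validates its own inputs; unknown names are rejected outright.
type ActionArgs = Record<string, unknown>;

const actions: Record<string, (args: ActionArgs) => unknown> = {
  greet: (args) => `Hello, ${String(args.name ?? "world")}`,
  sum: (args) => {
    const values = args.values;
    if (!Array.isArray(values) || !values.every((v) => typeof v === "number")) {
      throw new Error("sum: 'values' must be an array of numbers");
    }
    return values.reduce((a, b) => a + b, 0);
  },
};

// The agent payload now names an action instead of shipping raw JavaScript.
function runAction(name: string, args: ActionArgs): unknown {
  const fn = actions[name];
  if (!fn) throw new Error(`Unknown action: ${name}`);
  return fn(args);
}
```

The key property is that the LLM output can only select among server-defined behaviors, never define new ones.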
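The two unpinned-dependency findings (`dotenv` at `^16.4.0`, `simli-client` at `latest`) can be caught automatically. A hedged sketch of a CI check, assuming exact `x.y.z` semver is the desired policy (the helper names are illustrative):

```typescript
// Hypothetical CI helper: flag dependency specifiers that are not pinned to
// an exact version (ranges like ^16.4.0 or ~1.2.0, dist-tags like "latest").
function isPinned(spec: string): boolean {
  // Exact semver: major.minor.patch with an optional prerelease/build suffix.
  return /^\d+\.\d+\.\d+(?:[-+][\w.-]+)?$/.test(spec);
}

// Returns the names of dependencies whose specifier is not an exact version.
function unpinned(deps: Record<string, string>): string[] {
  return Object.entries(deps)
    .filter(([, spec]) => !isPinned(spec))
    .map(([name]) => name);
}
```

Run over the `dependencies` and `devDependencies` maps from `package.json` and fail the build when the returned list is non-empty.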
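For the markdown XSS finding, the robust fix is sanitizing `marked.parse` output with `DOMPurify` before touching `innerHTML`. As a minimal fallback when rendered markup is not actually needed, the untrusted text can simply be HTML-escaped (a standard sketch, not code from the avatar repository):

```typescript
// Minimal fallback sketch: when rich markup is not required, escape the
// LLM's text before placing it in the DOM instead of trusting innerHTML.
// For real markdown rendering, prefer DOMPurify.sanitize(marked.parse(text)).
function escapeHtml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```

Note the `&` replacement must run first, or already-escaped entities would be double-encoded.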
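The `osascript` finding recommends strict numeric validation before interpolating values into a command string. A sketch of that idea, expressed in TypeScript for consistency with the other examples (the AppleScript text and all function names are illustrative, not taken from `start-kiosk.sh`):

```typescript
// Hypothetical sketch: validate values before interpolating them into an
// osascript (or any shell) command string. Only non-negative integers pass,
// so shell metacharacters can never reach the command.
function assertInteger(name: string, value: string): number {
  if (!/^\d+$/.test(value)) {
    throw new Error(`${name} must be a non-negative integer, got: ${value}`);
  }
  return Number(value);
}

// Builds an illustrative AppleScript snippet from the validated coordinates.
function buildMoveWindowScript(winX: string, winY: string): string {
  const x = assertInteger("WIN_X", winX);
  const y = assertInteger("WIN_Y", winY);
  return `tell application "System Events" to set position of front window to {${x}, ${y}}`;
}
```

The same discipline applies in the shell script itself: reject any `${WIN_X}`/`${WIN_Y}` value that is not purely digits before it reaches `osascript -e`.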
Scan History
Embed Code
[](https://skillshield.io/report/fd35f6cbf256adab)
Powered by SkillShield