Trust Assessment
startclaw-optimizer received a trust score of 48/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 8 findings: 2 critical, 2 high, 3 medium, and 1 low severity. Key findings include Unsafe deserialization / dynamic eval, Missing required field: name, Node lockfile missing.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 3/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings8
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Arbitrary Function Execution in BrowserGovernor The `queueBrowserAction` method in `components/browser-governor.js` directly executes the `action` function provided as an argument (`await queuedAction.action()`). If an attacker can control the `action` function passed to this method, they can execute arbitrary JavaScript code within the skill's environment, leading to full system compromise. Ensure that `action` functions passed to `queueBrowserAction` originate from trusted sources only. If user input influences the `action` function, it must be strictly validated and sanitized, or a safe sandbox environment should be used for execution. | LLM | components/browser-governor.js:62 | |
| CRITICAL | Arbitrary Function Execution in OptimizerScheduler The `execute` method in `components/scheduler.js` directly calls the `task` argument if it's a function (`task(context)`). Additionally, `preflight` and `postflight` hooks are executed directly (`await hook(context)`). If an attacker can control the `task` argument or inject malicious functions into the `hooks` (e.g., via configuration), they can execute arbitrary code within the skill's environment, leading to full system compromise. Ensure that `task` arguments and `hook` functions passed to `OptimizerScheduler` originate from trusted sources only. If user input influences these functions, it must be strictly validated and sanitized, or a safe sandbox environment should be used for execution. | LLM | components/scheduler.js:39 | |
| HIGH | Unlisted Dependency: tiktoken The `context-compaction.js` file uses `require('tiktoken')`, but this dependency is not declared in the `dependencies` section of `package.json`. This constitutes a supply chain risk as the version of `tiktoken` is unmanaged, potentially leading to version conflicts, unexpected behavior, or the installation of a malicious version if not explicitly managed elsewhere. Add `tiktoken` to the `dependencies` in `package.json` with a pinned or semver-compatible version (e.g., `"tiktoken": "^0.6.0"`). | LLM | package.json:1 | |
| HIGH | Potential Prompt Injection Vector via Unsanitized Input to LLM In `components/router.js`, the `selectModel` method includes the raw `task` string in the `rationale` field. If this `rationale` (or the `task` itself) is later passed directly to an LLM without sanitization, it creates a prompt injection vulnerability. Furthermore, in `context-compaction.js`, the `summarizeWithHaiku` function is a placeholder explicitly stating it will be implemented with an 'actual API call' to an LLM. It takes `messages` (which can contain untrusted user input). When implemented, if these `messages` are not properly sanitized before being sent to the LLM API, it will be a direct prompt injection vulnerability, allowing an attacker to manipulate the LLM's behavior. Implement robust input sanitization and validation for all user-controlled input (`task`, `messages`) before it is passed to any LLM. Consider using techniques like prompt templating, input filtering, or dedicated LLM input sanitization libraries. | LLM | components/router.js:44 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/idanmann10/startclaw-optimizer/context-compaction.js:106 | |
| MEDIUM | Missing required field: name The 'name' field is required for claude_code skills but is missing from frontmatter. Add a 'name' field to the SKILL.md frontmatter. | Static | skills/idanmann10/startclaw-optimizer/SKILL.md:1 | |
| MEDIUM | Potential Data Exfiltration via Configurable Logger The `SubagentContextCompactor` in `context-compaction.js` allows a custom `logger` to be provided via `options.logger`. If a malicious or misconfigured logger is supplied that writes to files or sends data over a network, sensitive information (e.g., `messages` content, error details) passed to `logger.warn`, `logger.info`, or `logger.error` could be exfiltrated. The default `console` logger is safe, but the customizability introduces risk. Restrict the `logger` configuration to trusted implementations. If custom loggers are allowed, ensure they are sandboxed or strictly audited. Avoid logging sensitive user data directly without redaction or encryption. | LLM | context-compaction.js:7 | |
| LOW | Node lockfile missing package.json is present but no lockfile was found (package-lock.json, pnpm-lock.yaml, or yarn.lock). Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/idanmann10/startclaw-optimizer/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/8b84abcb795b76fe)
Powered by SkillShield