Trust Assessment
lighthouse-fixer received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 1 medium, and 1 informational. Key findings include an unpinned npm dependency version, an unsanitized URL passed to `execSync` (command injection), and untrusted Lighthouse report content used in an LLM prompt.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Unsanitized URL in `execSync` leads to command injection.** The `runLighthouse` function directly interpolates the user-provided `url` argument into a shell command executed via `child_process.execSync`, allowing an attacker to inject arbitrary shell commands with a crafted URL string (e.g., `'; rm -rf /;'`). The `--no-sandbox` flag for Chrome further reduces isolation, though the command injection is the primary issue. *Remediation:* sanitize or escape the `url` argument before passing it to `execSync`, or use `child_process.spawn` with an array of arguments (or a dedicated shell-escaping utility) instead of a single command string. | LLM | src/index.ts:10 |
| HIGH | **Untrusted Lighthouse report content used in LLM prompt.** The `getAIFixes` function builds a user prompt for the LLM from a `summary` generated by `summarizeReport`. That `summary` includes the `val.title` and `val.displayValue` fields taken directly from the Lighthouse report, which is produced from an attacker-controlled URL; a malicious site could craft its content to inject prompt instructions into these fields, manipulating the LLM's behavior or extracting sensitive information. *Remediation:* sanitize or filter `val.title` and `val.displayValue` before they are included in the `summary` sent to the LLM, for example with a character allow-list or by escaping sequences relevant to prompt injection. | LLM | src/index.ts:48 |
| MEDIUM | **Unpinned npm dependency version.** The dependency `commander` is not pinned to an exact version (`^12.1.0`). *Remediation:* pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/lighthouse-fixer/package.json |
| INFO | **Unpinned dependencies in `package.json`.** The `package.json` file uses caret (`^`) version ranges for all dependencies (e.g., `openai: ^4.73.0`), so npm may install newer minor or patch versions automatically. This is convenient but introduces supply-chain risk: a compromised dependency update could ship malicious code without explicit review. *Remediation:* pin exact versions (e.g., `"openai": "4.73.0"`), or ensure `package-lock.json` is consistently used and audited for deployments, and regularly audit dependencies for known vulnerabilities. | LLM | package.json:9 |
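For the two dependency findings, pinning means replacing caret ranges with exact versions in `package.json`. A minimal fragment, using the versions cited in the findings above:

```json
{
  "dependencies": {
    "commander": "12.1.0",
    "openai": "4.73.0"
  }
}
```

Alternatively, installing with `npm install --save-exact <pkg>` writes exact versions automatically, and committing and auditing `package-lock.json` provides equivalent reproducibility for deployments.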
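The remediation suggested for the CRITICAL finding can be sketched in TypeScript. The function name `runLighthouse` mirrors the finding; the Lighthouse CLI flags and the URL-validation policy here are illustrative assumptions, not the skill's actual code.

```typescript
import { execFileSync } from "child_process";

// Sketch: invoke Lighthouse without a shell, so metacharacters in the
// user-supplied URL (e.g. `; rm -rf /`) are never interpreted.
function runLighthouse(url: string): string {
  // Reject anything that does not parse as an http(s) URL up front.
  const parsed = new URL(url); // throws TypeError on malformed input
  if (parsed.protocol !== "http:" && parsed.protocol !== "https:") {
    throw new Error(`unsupported protocol: ${parsed.protocol}`);
  }
  // execFileSync passes each argument directly to the binary: no shell is
  // spawned, unlike execSync with an interpolated command string.
  return execFileSync(
    "lighthouse",
    [parsed.toString(), "--output=json", "--quiet"],
    { encoding: "utf8" }
  );
}
```

Using `spawn`/`execFileSync` with an argument array (rather than escaping a command string) removes the shell from the picture entirely, which is the more robust of the two remediations the finding proposes.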
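The sanitization recommended for the HIGH finding might look like the following sketch. The `val.title` and `val.displayValue` field names come from the finding itself; the character allow-list and length cap are assumptions chosen for illustration.

```typescript
// Strip characters outside a conservative allow-list and cap length, so
// report fields cannot smuggle prompt-injection payloads into the LLM prompt.
function sanitizeField(raw: string, maxLen = 200): string {
  return raw
    .replace(/[^\w\s.,:%()\/-]/g, "") // keep word chars, spaces, basic punctuation
    .slice(0, maxLen)
    .trim();
}

interface AuditValue {
  title: string;
  displayValue?: string;
}

// Build the summary from sanitized fields only, never raw report content.
function summarizeReport(audits: AuditValue[]): string {
  return audits
    .map((val) => `- ${sanitizeField(val.title)}: ${sanitizeField(val.displayValue ?? "n/a")}`)
    .join("\n");
}
```

An allow-list like this keeps typical Lighthouse output (e.g. `First Contentful Paint: 2.1 s`) intact while dropping backticks, semicolons, and other characters an injected instruction would likely need.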
Embed Code
[SkillShield Report](https://skillshield.io/report/7074c4474666df0b)
Powered by SkillShield