Trust Assessment
lighthouse-fixer received a trust score of 60/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 0 high, 2 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Command Injection via unsanitized URL in execSync, Potential Prompt Injection via Lighthouse Report Summary.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Command Injection via unsanitized URL in execSync The `runLighthouse` function directly interpolates the user-provided `url` argument into a shell command executed via `child_process.execSync`. This allows an attacker to inject arbitrary shell commands by crafting a malicious URL containing shell metacharacters (e.g., `;`, `&&`, `|`, `$(...)`). For example, a URL like `https://example.com; rm -rf /` could lead to arbitrary code execution on the host system. Avoid direct interpolation of user input into shell commands. Use `child_process.spawn` or `child_process.execFile` with arguments passed as an array to prevent shell interpretation. If `execSync` is strictly necessary, thoroughly sanitize and escape the `url` parameter using a robust shell escaping library before inclusion in the command string. | LLM | src/index.ts:11 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/lighthouse-fix/package.json | |
| MEDIUM | Potential Prompt Injection via Lighthouse Report Summary The `getAIFixes` function sends a `summary` of the Lighthouse report to the OpenAI LLM as user content. This `summary` is derived from the Lighthouse report, which is generated by auditing a user-provided URL. If a malicious website can craft its content (e.g., HTML title, meta descriptions, or other elements that Lighthouse reports) to include prompt injection instructions, these instructions could be passed to the LLM, potentially manipulating its behavior or extracting sensitive information. Implement strict sanitization, filtering, or encoding of the `summary` content before sending it to the LLM, especially for fields that can be influenced by the target URL. Consider using a more robust prompt engineering technique to isolate user input from system instructions, such as enclosing user input in specific delimiters that the LLM is instructed to treat as literal text. | LLM | src/index.ts:50 |
Scan History
Embed Code
[](https://skillshield.io/report/df98029b75200a75)
Powered by SkillShield