Trust Assessment
vitals-fixer received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Prompt Injection via User-Controlled Source Code, Potential Data Exfiltration via Prompt Injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via User-Controlled Source Code The skill directly incorporates user-provided source code content into the 'user' message of an OpenAI API call without sanitization or validation. An attacker can embed malicious instructions within their source files (e.g., JavaScript comments, HTML comments, CSS rules) to manipulate the behavior of the `gpt-4o-mini` LLM. This could lead to the LLM ignoring its system prompt, generating unintended output, or performing actions not authorized by the skill's design. Implement robust input sanitization and validation for user-provided code before it is sent to the LLM. Consider using a separate, isolated LLM call for processing user-provided code snippets versus system-level instructions. Techniques like input/output parsing, content filtering, or using a less capable model for untrusted input can mitigate this risk. Ensure the LLM's system prompt is sufficiently robust to resist common injection attempts. | LLM | src/index.ts:30 | |
| HIGH | Potential Data Exfiltration via Prompt Injection Leveraging the prompt injection vulnerability (SS-LLM-001), an attacker could instruct the LLM to extract and output sensitive information present in the scanned source files. The `scanSourceFiles` function reads a broad range of file types (js, ts, jsx, tsx, html, css, vue, svelte) from the user-specified directory. If these files contain credentials, API keys, personal data, or other sensitive information, a successful prompt injection could lead to this data being exfiltrated through the LLM's response. In addition to addressing prompt injection, restrict the types of files that can be scanned to only those strictly necessary for the skill's function. Implement content filtering for sensitive patterns (e.g., regex for API keys, common credential formats) before sending data to the LLM. Advise users against scanning directories containing sensitive configuration files or private data. | LLM | src/index.ts:30 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/vitals-fixer/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/4fd181d4f98262b2)
Powered by SkillShield