Trust Assessment
core-vitals-fixer received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Prompt Injection via User Code, Data Exfiltration of User Source Code.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Prompt Injection via User Code The skill constructs an LLM prompt by directly embedding user-provided source code into the 'user' message. Malicious code within the scanned files could attempt to manipulate the LLM's behavior, extract system prompts, or steer the conversation in unintended ways, potentially leading to information disclosure or altered output. Implement robust input sanitization or use a more advanced prompt engineering technique (e.g., XML tags, JSON structure, or a separate 'code' role) to clearly delineate user code from instructions, making it harder for the LLM to misinterpret code as instructions. Consider using a model that is specifically fine-tuned for code analysis and less susceptible to prompt injection from code. | LLM | src/index.ts:30 | |
| HIGH | Data Exfiltration of User Source Code The skill's core functionality involves reading local source code files from the user's system and transmitting their content to the OpenAI API for analysis. This constitutes data exfiltration, as potentially sensitive or proprietary user code is sent to an external, third-party service. While this is the stated purpose of the skill, users should be explicitly aware of this data transfer. Ensure clear and prominent disclosure to the user that their code will be sent to a third-party AI service. Provide options for local-only processing if feasible, or allow users to redact sensitive sections of their code before submission. Emphasize data privacy policies of the AI provider. | LLM | src/index.ts:28 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/core-vitals-fixer/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/675eea26be74f77c)
Powered by SkillShield