Trust Assessment
adr-writer received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 2 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Project metadata exfiltration to external LLM, Unsanitized user input in LLM prompt (Prompt Injection).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Unsanitized user input in LLM prompt (Prompt Injection) The user's command-line input (`decision`) is directly embedded into the LLM prompt without any sanitization or escaping. An attacker can craft malicious input to manipulate the LLM's behavior, potentially leading to unintended outputs, disclosure of system prompts, or other undesirable actions. Implement robust input sanitization or use structured input methods (e.g., JSON, XML tags) for the LLM prompt. Clearly separate user input from system instructions within the prompt to prevent user input from overriding or manipulating the LLM's directives. Consider using LLM-specific input validation or prompt templating libraries. | LLM | src/index.ts:7 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/adr-writer/package.json | |
| MEDIUM | Project metadata exfiltration to external LLM The skill reads the 'package.json' file from the current working directory, extracts the project name and its dependencies, and sends this information to the OpenAI API as part of the LLM prompt. This constitutes data exfiltration of local project metadata to a third-party service without explicit user consent or warning. Remove the automatic reading and sending of 'package.json' data. If this context is crucial, make it an explicit, opt-in feature with clear user notification and consent, or allow the user to provide this context manually. | LLM | src/index.ts:6 |
Scan History
Embed Code
[](https://skillshield.io/report/07353afcd47bd499)
Powered by SkillShield