Trust Assessment
wiki-gen received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include Unpinned npm dependency version, LLM Prompt Injection via Untrusted File Content, Path Traversal Leading to Arbitrary File Read and Data Exfiltration.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Path Traversal Leading to Arbitrary File Read and Data Exfiltration The skill uses `fs.readdirSync` and `fs.readFileSync` with an `input` directory argument that is directly provided by the user via the CLI. This allows for a path traversal vulnerability. A malicious user could provide a path like `../../../../etc` to read sensitive system files (e.g., `/etc/passwd`, `/etc/shadow`, API keys, configuration files) outside the intended project scope. The content of these files is then included in the `userContent` and sent to the OpenAI API, constituting data exfiltration to a third-party service. The `slice(0, 2000)` only limits the size, not the sensitivity, of the exfiltrated data. Implement robust path sanitization and validation for the `directory` argument to prevent path traversal. Ensure that the resolved path is strictly within the intended project directory. For example, resolve the path to an absolute path and then check if it starts with the expected project root. | LLM | src/index.ts:7 | |
| HIGH | LLM Prompt Injection via Untrusted File Content The skill constructs the user prompt for the OpenAI model by embedding content from files read from a user-specified directory. If a malicious file (e.g., a `.js` file containing prompt injection instructions) is present in the scanned directory, its content will be included in the prompt sent to the OpenAI model. This could allow an attacker to manipulate the LLM's behavior, extract system prompts, or generate unintended outputs. Sanitize or validate file content before embedding it into the LLM prompt. Consider using a dedicated parsing library for code files to extract only relevant, non-executable parts, or implement strict content filtering. Alternatively, warn users about the risk of scanning untrusted codebases. | LLM | src/index.ts:13 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/wiki-gen/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/d60ca88e888fd893)
Powered by SkillShield