Trust Assessment
webhook-gen received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings, in order of severity: Direct Prompt Injection via User Input (critical), Arbitrary File Write via User-Controlled Path and LLM Output (high), and an unpinned npm dependency version (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Direct Prompt Injection via User Input.** The skill directly injects user-provided `eventDescription` and `framework` into the LLM's system and user prompts without sanitization or validation. A malicious user can craft these inputs to manipulate the LLM's behavior, leading to the generation of arbitrary, potentially harmful code or the disclosure of sensitive information (e.g., system prompt details). This is critical because the skill's core function is code generation, and prompt injection can subvert it to generate malicious code. *Remediation:* Implement robust input validation and sanitization for `eventDescription` and `framework`. For `framework`, use an allow-list of supported values instead of direct injection. For `eventDescription`, consider prompt templating with proper escaping, or an additional LLM layer for input classification/redaction to detect and mitigate malicious prompts before they reach the main generation LLM. Ensure the LLM is sandboxed and cannot access sensitive system resources. | LLM | src/index.ts:12 |
| HIGH | **Arbitrary File Write via User-Controlled Path and LLM Output.** The CLI tool lets users specify an arbitrary output file path (`-o, --output <path>`) via `fs.writeFileSync(path.resolve(options.output), result, 'utf-8')`. Combined with the prompt injection vulnerability (SS-LLM-001), a malicious actor could inject a prompt that causes the LLM to generate harmful code (e.g., a shell script that deletes files, a malicious configuration file, or a data-exfiltration script). If the user is then tricked into saving this output to a sensitive system path, it could lead to severe system compromise or data loss. While `path.resolve` mitigates simple directory traversal, it does not prevent writing to any valid path the user has permission for. *Remediation:* Fix the underlying prompt injection vulnerability (SS-LLM-001). Additionally, restrict output paths to a designated safe directory or require explicit confirmation before writing to sensitive or absolute paths, and scan the LLM's output for potentially malicious code before writing it to disk. | LLM | src/cli.ts:20 |
| MEDIUM | **Unpinned npm dependency version.** The dependency `commander` is not pinned to an exact version (`^12.1.0`). *Remediation:* Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/webhook-gen/package.json |
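For the medium-severity finding, pinning means replacing the caret range `^12.1.0` with the exact version in the skill's `package.json` (fragment shown for illustration):

```json
{
  "dependencies": {
    "commander": "12.1.0"
  }
}
```

Equivalently, `npm install commander@12.1.0 --save-exact` records the exact version, and `npm config set save-exact true` makes that the default for future installs.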
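The critical finding's recommended fix — an allow-list for `framework` plus basic sanitization of `eventDescription` — can be sketched as below. The supported framework names and function names are illustrative assumptions, not taken from the skill's actual code, and the sanitizer is a coarse first pass, not a complete prompt-injection defense.

```typescript
// Hypothetical allow-list for the `framework` option; the values here are
// assumptions for illustration, not the skill's real supported set.
const SUPPORTED_FRAMEWORKS = ["express", "fastify", "koa"] as const;
type Framework = (typeof SUPPORTED_FRAMEWORKS)[number];

function validateFramework(input: string): Framework {
  const normalized = input.trim().toLowerCase();
  if ((SUPPORTED_FRAMEWORKS as readonly string[]).includes(normalized)) {
    return normalized as Framework;
  }
  throw new Error(
    `Unsupported framework "${input}". Expected one of: ${SUPPORTED_FRAMEWORKS.join(", ")}`
  );
}

// Coarse sanitization before templating the description into a prompt:
// strip template/markup metacharacters and bound the input size. This
// reduces, but does not eliminate, the injection surface.
function sanitizeDescription(input: string, maxLength = 2000): string {
  return input
    .replace(/[`<>{}]/g, "") // drop characters common in injection payloads
    .slice(0, maxLength);    // bound the prompt size
}
```

The allow-list approach removes `framework` from the injection surface entirely; for free-text fields like `eventDescription`, sanitization should be paired with the classification/redaction layer the finding recommends.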
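The high-severity finding notes that `path.resolve` alone does not confine writes to a safe location. A minimal sketch of the recommended "designated safe directory" check, assuming a hypothetical `safeResolveOutput` helper (not part of the skill's CLI):

```typescript
import * as path from "node:path";

// Hypothetical helper: resolve a user-supplied output path and refuse anything
// that escapes the designated base directory. `baseDir` is an assumption for
// illustration; the skill's real CLI has no such parameter.
function safeResolveOutput(userPath: string, baseDir: string): string {
  const base = path.resolve(baseDir);
  const resolved = path.resolve(base, userPath);
  // path.relative returns a path starting with ".." when `resolved` lies
  // outside `base`; an absolute result indicates a different root (Windows).
  const rel = path.relative(base, resolved);
  if (rel.startsWith("..") || path.isAbsolute(rel)) {
    throw new Error(`Refusing to write outside ${base}: ${userPath}`);
  }
  return resolved;
}
```

The `path.relative` check is the key difference from a bare `path.resolve`: it rejects both `../`-style traversal and absolute paths such as `/etc/...`, while still allowing any nested path inside the base directory.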
[View the full report on SkillShield](https://skillshield.io/report/abc54ab9bfb1fad3)
Powered by SkillShield