Trust Assessment
form-gen received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 2 medium, and 0 low severity. Key findings include Unpinned npm dependency version, User input directly interpolated into LLM prompt, LLM can be prompted to generate malicious code.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 5acc5677). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | User input directly interpolated into LLM prompt The 'description' argument, which is user-provided input, is directly interpolated into the 'user' role message sent to the OpenAI API without any sanitization or escaping. An attacker can craft a malicious 'description' to manipulate the LLM's behavior, override system instructions, or attempt to extract sensitive information from the LLM's context. This could lead to the generation of malicious code or data exfiltration. Implement robust input sanitization or use techniques like XML/JSON tagging, turn-taking, or content filtering to isolate user input from system instructions. For example, wrap user input in specific delimiters that the LLM is instructed to treat as literal data, or use a separate tool call for user input. | LLM | src/index.ts:12 | |
| HIGH | LLM can be prompted to generate malicious code Due to the prompt injection vulnerability (SS-LLM-001), an attacker can instruct the LLM to generate arbitrary code, including malicious scripts. If the generated code is subsequently written to a file (via the '-o' option in `src/cli.ts`) and executed by the user, it could lead to command injection, data exfiltration, or other system compromises. The skill's core function is code generation, making this a high-impact risk if the generation process is subverted. Address the underlying prompt injection vulnerability (SS-LLM-001). Additionally, consider implementing a sandbox or static analysis for generated code before it is written to disk or presented to the user, especially if the skill is intended for environments where generated code might be automatically executed. | LLM | src/cli.ts:21 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/form-gen/package.json | |
| MEDIUM | API Key loaded from environment variable, vulnerable to exfiltration via prompt injection The `OPENAI_API_KEY` is loaded directly from `process.env`. While this is a common and generally secure practice for secrets, the presence of a prompt injection vulnerability (SS-LLM-001) means an attacker could potentially craft a prompt that instructs the LLM to generate code designed to read and exfiltrate environment variables, including the API key, if the generated code is subsequently executed. This represents an indirect path to credential harvesting. Address the prompt injection vulnerability (SS-LLM-001). Ensure that the LLM's capabilities are strictly limited and that it cannot access or reveal sensitive environment variables. If the generated code is to be executed in an untrusted environment, consider using a more secure method for API key management, such as a secrets manager that injects the key only at runtime into a secure context. | LLM | src/index.ts:3 |
Scan History
Embed Code
[](https://skillshield.io/report/0168e69eb58a6efb)
Powered by SkillShield