Trust Assessment
animation-gen received a trust score of 64/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 2 high, 1 medium, and 1 low severity. Key findings include Unpinned npm dependency version, Prompt Injection via User Description, Arbitrary File Write via User-Controlled Output Path.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 68/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Prompt Injection via User Description The user-provided 'description' argument is directly concatenated into the LLM's 'user' message without sanitization. A malicious user could craft the description to manipulate the LLM's behavior, override system instructions, or attempt to extract sensitive information from the model. Implement robust input sanitization for the 'description' argument before passing it to the LLM. Consider using a more structured input format for the LLM or a dedicated prompt templating library that safely handles user input, ensuring the system prompt remains immutable. | LLM | src/index.ts:16 | |
| HIGH | Arbitrary File Write via User-Controlled Output Path The skill uses `fs.writeFileSync(opts.output, code)` where `opts.output` is directly controlled by user input via the `-o, --output <file>` option. This allows an attacker to specify an arbitrary file path on the system, potentially overwriting critical system files or writing malicious code to executable locations (e.g., shell scripts, configuration files) if the process has sufficient permissions. The 'code' being written is AI-generated, which could also be manipulated via prompt injection to produce harmful content. Validate and sanitize the `opts.output` path. Restrict file writes to a designated, non-sensitive output directory. Prevent directory traversal (`../`) and absolute paths outside the intended scope. Consider adding a confirmation prompt before writing to existing files or sensitive locations. Ensure the AI-generated content is also reviewed for potentially harmful constructs before writing to disk. | LLM | src/cli.ts:22 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/animation-gen/package.json | |
| LOW | Access to OPENAI_API_KEY environment variable The skill accesses the `OPENAI_API_KEY` directly from `process.env`. While this is standard practice for API key management, it means the API key is present in the process's environment. If other vulnerabilities (e.g., command injection, prompt injection leading to code generation) were exploited, this key could potentially be exfiltrated. This is a general risk associated with handling sensitive credentials. Ensure the environment where the skill runs has strict access controls. Minimize the permissions of the process executing the skill. While direct access to `process.env` is necessary here, mitigating other vulnerabilities (like prompt injection and arbitrary file write) is crucial to prevent its exfiltration. | LLM | src/index.ts:3 |
Scan History
Embed Code
[](https://skillshield.io/report/8976eb7836832d8f)
Powered by SkillShield