Trust Assessment
github-action-gen received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 2 critical, 0 high, 1 medium, and 0 low severity. Key findings: prompt injection via user-controlled input to the LLM, potential command injection via maliciously generated GitHub Actions YAML, and an unpinned npm dependency version.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt injection via user-controlled input to the LLM.** The `description` argument, directly controlled by the user, is interpolated into the `user` message sent to the OpenAI API without any sanitization or validation. An attacker can craft a malicious `description` to manipulate the underlying LLM (gpt-4o-mini) into generating arbitrary or malicious GitHub Actions YAML, bypassing the system prompt's instruction to "Return ONLY the YAML content, no explanation." Recommendation: validate and sanitize the `description` argument before it is passed to the LLM; consider a structured input approach (e.g., JSON schema, Pydantic) or a dedicated LLM guardrail library to constrain the output; and validate the generated YAML against a schema or a set of security rules before writing it to a file or returning it to the user. | LLM | src/index.ts:12 |
| CRITICAL | **Potential command injection via maliciously generated GitHub Actions YAML.** Following a successful prompt injection (SS-LLM-001), an attacker could manipulate the LLM into generating YAML containing malicious `run:` steps or other commands. If a user then executes this YAML in a CI/CD environment (e.g., by committing it to a repository and triggering a workflow), arbitrary shell commands run with the permissions of the CI/CD runner, risking data exfiltration, unauthorized access, or system compromise. Recommendation: in addition to preventing prompt injection, validate the YAML post-generation; scan it for suspicious patterns, known malicious commands, or deviations from the expected structure before writing it to disk or presenting it for execution; consider a linter or security scanner designed for GitHub Actions workflows; and educate users about the risks of executing untrusted or AI-generated code without review. | LLM | src/cli.ts:23 |
| MEDIUM | **Unpinned npm dependency version.** The dependency `commander` is not pinned to an exact version (`^12.1.0`). Recommendation: pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/action-gen/package.json |
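The MEDIUM finding is resolved by dropping the caret so npm installs exactly the audited version; the fragment below shows the relevant `dependencies` entry (running `npm install commander@12.1.0 --save-exact` produces the same result).

```json
{
  "dependencies": {
    "commander": "12.1.0"
  }
}
```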
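The input-side remediation for SS-LLM-001 could be sketched as a pre-flight check on the `description` argument. The function name `isSafeDescription`, the length cap, and the pattern list below are illustrative assumptions, not part of the github-action-gen codebase; a denylist alone is a weak control and should be paired with the post-generation validation the finding also recommends.

```typescript
// Hypothetical sketch -- names and patterns are illustrative,
// not taken from the github-action-gen source.
const MAX_DESCRIPTION_LENGTH = 500;

// Phrases commonly seen in prompt-injection attempts. Necessarily
// incomplete: treat this as one layer of defense, not the only one.
const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?(previous|prior|above) (instructions|prompts)/i,
  /disregard (the )?(system|previous) (prompt|instructions)/i,
  /you are now/i,
];

// Rejects empty, oversized, or obviously injection-shaped descriptions
// before they are interpolated into the LLM user message.
function isSafeDescription(description: string): boolean {
  if (description.length === 0 || description.length > MAX_DESCRIPTION_LENGTH) {
    return false;
  }
  return !INJECTION_PATTERNS.some((p) => p.test(description));
}
```

A structured-input approach (e.g., a JSON schema that only accepts named workflow fields) constrains the attack surface further than free-text filtering can.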
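The post-generation check recommended for SS-LLM-002 might look like the following sketch. `findSuspiciousLines` and its pattern list are hypothetical, not the skill's actual implementation; a real deployment should prefer a dedicated GitHub Actions workflow scanner with a far broader rule set.

```typescript
// Hypothetical post-generation scan; the pattern list is deliberately
// small and only catches a few well-known malicious shapes.
const SUSPICIOUS_YAML_PATTERNS: RegExp[] = [
  /curl\s+[^|\n]*\|\s*(ba|z)?sh/i, // pipe-to-shell download in a run: step
  /wget\s+[^|\n]*\|\s*(ba|z)?sh/i,
  /base64\s+(-d|--decode)/i,       // common payload obfuscation
  /rm\s+-rf\s+\//,                 // destructive filesystem command
];

// Returns the lines of generated YAML that match a suspicious pattern,
// so the caller can refuse to write the file or flag it for human review.
function findSuspiciousLines(yaml: string): string[] {
  return yaml
    .split("\n")
    .filter((line) => SUSPICIOUS_YAML_PATTERNS.some((p) => p.test(line)));
}
```

Blocking the write when this returns a non-empty list, and showing the flagged lines to the user, keeps a human in the loop before the YAML ever reaches a CI/CD runner.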
Embed Code
[View the full report on SkillShield](https://skillshield.io/report/f6a3a282b4fae3c5)