Trust Assessment
tailwind-config-gen received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 1 critical, 2 high, 1 medium, and 1 low severity. Key findings include an unpinned npm dependency version, unsanitized user input in the LLM prompt, and an arbitrary file write caused by an unsanitized output path.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest, at 38/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Unsanitized user input in LLM prompt.** The `colors` array, provided directly by the user as command-line arguments, is interpolated into the `user` message sent to the OpenAI API without any sanitization or validation. This allows an attacker to inject arbitrary instructions into the LLM's prompt, potentially overriding system instructions, extracting sensitive information (e.g., environment variables such as OPENAI_API_KEY), or generating malicious code. Remediation: implement robust input validation and sanitization for user-provided `colors`. Consider using a structured input format for the LLM (e.g., JSON) and validating each color string as a valid hex code before passing it to the LLM. If direct string interpolation is necessary, escape or filter out characters that could be interpreted as prompt instructions (e.g., `system:`, `user:`, `assistant:`). See the validation sketch after this table. | LLM | src/index.ts:10 |
| HIGH | **Arbitrary file write due to unsanitized output path.** The skill allows users to specify an arbitrary output file path via the `-o` or `--output` option. This path is passed directly to `fs.writeFileSync` without any validation or restriction. An attacker could specify the path of a sensitive system file (e.g., `/etc/passwd`, `~/.bashrc`) or an executable file, potentially overwriting it with content generated by the LLM (which could itself be malicious due to prompt injection). Remediation: restrict the output file path to a safe directory (e.g., the current working directory or a designated output folder). Validate that the provided path contains no directory traversal sequences (e.g., `../`) and is not an absolute path outside the allowed scope. Consider allowing only a filename and prepending a safe base directory. See the path-restriction sketch after this table. | LLM | src/cli.ts:15 |
| HIGH | **LLM-generated content written to an arbitrary file path, enabling command injection.** This finding combines the prompt injection vulnerability (SS-LLM-001) and the arbitrary file write vulnerability (SS-LLM-005). An attacker can inject instructions into the LLM prompt to generate malicious code (e.g., JavaScript or shell commands), which is then written to an attacker-specified file path. If that file is subsequently executed by the system (e.g., a `.js` file being `require`d, or a shell script being run), the result is arbitrary command execution on the host. This is a composite risk: remediating SS-LLM-001 (prompt injection) and SS-LLM-005 (arbitrary file write) will mitigate it. Additionally, if the generated `tailwind.config.js` is ever executed, treat its content as untrusted and sandbox or otherwise limit its execution context. | LLM | src/cli.ts:15 |
| MEDIUM | **Unpinned npm dependency version.** The dependency `commander` is not pinned to an exact version (`^12.1.0`). Remediation: pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/tailwind-config-gen/package.json |
| LOW | **Unpinned dependencies in package.json.** The `package.json` file uses caret (`^`) ranges for dependencies (e.g., `"openai": "^4.73.0"`). While `package-lock.json` pins exact versions, if the lockfile is not used or is deleted, `npm install` could fetch newer, potentially incompatible or vulnerable package versions. This introduces a slight supply-chain risk, because the exact dependency tree is not strictly enforced by `package.json` alone. Remediation: consider exact version pinning in `package.json` (e.g., `"openai": "4.73.0"`) or a dependency management workflow that strictly enforces lockfiles in all environments. Ensure that `package-lock.json` is always used during deployment and installation. | LLM | package.json:10 |
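For the critical prompt-injection finding, one possible mitigation is to validate each user-supplied color as a hex code before it ever reaches the prompt, and to pass the values to the model as structured JSON rather than free-form text. The sketch below is illustrative only: the helper names (`assertHexColors`, `buildMessages`) and the prompt wording are assumptions, not the skill's actual source.

```typescript
// Minimal sketch (assumed names, not the skill's real code): validate user-supplied
// colors before they reach the LLM prompt.

const HEX_COLOR = /^#(?:[0-9a-fA-F]{3}|[0-9a-fA-F]{6})$/;

/** Throws if any entry is not a plain hex color, so prompt text cannot be smuggled in. */
function assertHexColors(colors: string[]): string[] {
  for (const color of colors) {
    if (!HEX_COLOR.test(color)) {
      throw new Error(`Invalid color "${color}": expected a hex code like #1a2b3c`);
    }
  }
  return colors;
}

/** Builds chat messages with colors passed as JSON data, not interpolated prose. */
function buildMessages(colors: string[]) {
  return [
    { role: "system" as const, content: "Generate a tailwind.config.js theme from the provided palette." },
    { role: "user" as const, content: JSON.stringify({ colors: assertHexColors(colors) }) },
  ];
}
```

A strict whitelist pattern sidesteps escaping entirely: anything that is not a hex code is rejected before a prompt is ever built.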
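For the arbitrary-file-write findings, a common pattern is to resolve the user-supplied path against a fixed base directory and refuse anything that escapes it. This is a hedged sketch, not the skill's implementation; the helper name `resolveOutputPath` and the choice of `process.cwd()` as the base directory are assumptions.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

/**
 * Resolves a user-supplied output path against a base directory and rejects
 * traversal sequences or absolute paths that would escape it.
 */
function resolveOutputPath(userPath: string, baseDir: string = process.cwd()): string {
  const resolved = path.resolve(baseDir, userPath);
  // A relative path starting with ".." (or an absolute one) means the target is outside baseDir.
  const relative = path.relative(baseDir, resolved);
  if (relative.startsWith("..") || path.isAbsolute(relative)) {
    throw new Error(`Refusing to write outside ${baseDir}: ${userPath}`);
  }
  return resolved;
}

// Example: only paths under the current working directory are accepted.
const safePath = resolveOutputPath("tailwind.config.js");
fs.writeFileSync(safePath, "/* generated config */\n");
```

Whether the base directory should be the working directory or a dedicated output folder depends on how the CLI is meant to be used; the essential point is that the check runs before `fs.writeFileSync` is called.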
Powered by SkillShield