Trust Assessment
k8s-gen received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 1 medium, and 1 low severity. Key findings include Unpinned npm dependency version, Arbitrary file content sent to external AI, User input directly injected into LLM prompt.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 53/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | User input directly injected into LLM prompt The `ai-k8s` tool constructs an LLM prompt where the user-provided `<input>` (either a plain English description or the content of a file) is directly inserted into the `user` message. A malicious user could craft this input to include instructions that manipulate the underlying `gpt-4o-mini` model, potentially leading to unintended outputs, disclosure of system prompt details, or other adversarial behaviors. This is a classic prompt injection vulnerability. Implement robust prompt engineering techniques to isolate user input from system instructions. This could involve using specific delimiters for user input, instructing the LLM to treat content within those delimiters as data only, or using a separate tool call for processing file content before generating the prompt. Ensure the system prompt is robust against adversarial instructions. | LLM | src/index.ts:42 | |
| HIGH | Arbitrary file content sent to external AI The `ai-k8s` tool reads the content of any local file specified as the `<input>` argument if `isFilePath` determines it's a file. This content is then sent directly to the OpenAI API. This poses a significant data exfiltration risk, as users might inadvertently provide paths to sensitive files (e.g., configuration files, private keys, environment files) whose contents would then be transmitted to OpenAI without explicit warning or consent for that specific file type. The `isFilePath` function is overly permissive, checking for any existing file. Implement strict validation for file inputs, allowing only specific file extensions (e.g., `.yml`, `.yaml`) or file names (e.g., `docker-compose.yml`). Alternatively, prompt the user for explicit confirmation before sending the content of a detected file to the external AI, especially if it's not a recognized `docker-compose` file. Clearly inform users that file contents are transmitted to OpenAI. | LLM | src/index.ts:20 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/k8s-gen/package.json | |
| LOW | Unpinned dependencies in package.json The `package.json` file specifies dependencies using caret (`^`) ranges (e.g., `^12.1.0`). While `package-lock.json` pins exact versions, relying on caret ranges in `package.json` can lead to non-deterministic builds if the lockfile is not consistently used or if new versions with vulnerabilities are published within the specified range. This increases the risk of supply chain attacks or unexpected behavior changes due to transitive dependency updates. Pin exact versions for all dependencies in `package.json` to ensure deterministic builds and prevent unexpected updates. For example, change `^12.1.0` to `12.1.0`. Regularly audit dependencies for known vulnerabilities. | LLM | package.json:16 |
Scan History
Embed Code
[](https://skillshield.io/report/d28ac0a7fc873de5)
Powered by SkillShield