Trust Assessment
stash-namer received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 2 critical, 2 high, 0 medium, and 0 low severity. Key findings include Potential Data Exfiltration via LLM API, Command Injection via LLM-Generated Output, and Prompt Injection Risk against the Skill's LLM.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 10/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Data Exfiltration via LLM API.** The skill explicitly states it "sends the diff summary to GPT-4o-mini", so the content of the user's code changes (`git diff` output), which can contain sensitive or proprietary information, is transmitted to a third-party LLM service (OpenAI). This is a direct data exfiltration risk. *Mitigation:* warn users explicitly that their code changes are sent to a third-party LLM; offer options for redacting sensitive information from diffs, or a local/on-premise LLM where available; ensure compliance with data-privacy regulations and OpenAI's data-usage policies. | LLM | SKILL.md:62 |
| CRITICAL | **Command Injection via LLM-Generated Output.** The skill uses an LLM-generated stash name directly in a shell command ("runs `git stash push -m` with that name"). If the LLM generates a name containing shell metacharacters (e.g. `;`, backticks, `$`), arbitrary commands could be injected and executed on the user's system. This is a critical vulnerability whenever LLM output reaches a shell unsanitized. *Mitigation:* robustly sanitize and escape the generated stash name before any shell use, or execute commands through an API that escapes arguments automatically (e.g. `shlex.quote` in Python, or equivalent functions in Node.js). | LLM | SKILL.md:62 |
| HIGH | **Prompt Injection Risk Against the Skill's LLM.** The skill feeds user-controlled content (the git diff summary) to an external LLM (GPT-4o-mini). A malicious user could craft code changes containing prompt-injection instructions to manipulate the LLM's behavior (e.g. generate a harmful stash name, reveal internal prompts, or trigger unintended actions). *Mitigation:* validate and sanitize the `git diff` content before sending it to the LLM; consider LLM safety features, guardrails, or a separate classification model to detect injection attempts; clearly define the LLM's role and constraints. | LLM | SKILL.md:62 |
| HIGH | **Unpinned Dependency in `npx` Command (Supply-Chain Risk).** The skill instructs users to run `npx ai-stash-name` without specifying a version, so `npx` downloads and executes the latest published version from npm. A malicious update to the `ai-stash-name` package could then run on the user's machine without explicit version review or approval. *Mitigation:* recommend users pin the version (e.g. `npx ai-stash-name@1.0.0`) to ensure deterministic execution; the skill developer should also follow npm supply-chain best practices, including regular security audits and protection against account compromise. | LLM | SKILL.md:12 |
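The command-injection mitigation above can be made concrete. A minimal Python sketch of the two safe patterns (the skill itself ships as an npm package, so this is illustrative rather than the skill's actual code; `stash_with_name` and `stash_command_string` are hypothetical helper names):

```python
import shlex
import subprocess

def stash_with_name(name: str) -> None:
    """Safest option: pass arguments as a list so no shell ever parses
    the untrusted name; characters like ';', '`', and '$' stay literal."""
    subprocess.run(["git", "stash", "push", "-m", name], check=True)

def stash_command_string(name: str) -> str:
    """If a single shell string is unavoidable, quote the untrusted value."""
    return f"git stash push -m {shlex.quote(name)}"

# A hostile "name" is neutralized by quoting:
# stash_command_string("wip; rm -rf ~") -> "git stash push -m 'wip; rm -rf ~'"
```

Preferring the argument-list form removes the shell from the picture entirely, which is stronger than escaping after the fact.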
[View the full report on SkillShield](https://skillshield.io/report/1e14a936c69bd310)