Trust Assessment
stash-namer received a trust score of 66/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include an unpinned npm dependency version, user code diffs sent to an external AI service, and a user-controlled git diff used in an LLM prompt.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **User's code diffs sent to external AI service.** The skill captures the user's staged and unstaged git diffs and sends them directly to the OpenAI API (`gpt-4o-mini`) to generate a stash name, so potentially sensitive or proprietary source code is transmitted to a third-party service. Although `SKILL.md` mentions this data transmission, users may not fully understand the privacy implications or the extent of data shared. Mitigations: (1) warn users explicitly about transmission to OpenAI *before* execution, especially for sensitive data; (2) offer a local-LLM or more privacy-preserving option where possible; (3) redact sensitive patterns (e.g., API keys, PII) from the diff before sending, noting this is difficult to do comprehensively; (4) link to OpenAI's data-usage policy for API data. | LLM | `src/index.ts:23` |
| HIGH | **User-controlled git diff used in LLM prompt.** The `generateStashName` function incorporates the user's git diff (staged and unstaged changes) directly into the `user` message of the OpenAI API call. A malicious actor could embed prompt-injection instructions in their code changes (e.g., in comments, string literals, or commit messages that appear in the diff) to manipulate the LLM into producing an unexpected or harmful stash name, potentially misleading the user or causing unintended side effects. The system prompt attempts to constrain the output, but LLMs remain susceptible to such attacks. Mitigations: (1) sanitize or filter the `diff` content to neutralize injection attempts before sending it to the LLM; (2) strictly validate the LLM's output (`name`) against expected patterns for a stash message (length, character set, absence of suspicious keywords); (3) explore LLM guardrails or input/output moderation APIs to detect malicious prompts or responses; (4) warn users about the prompt-injection risk. | LLM | `src/index.ts:23` |
| MEDIUM | **Unpinned npm dependency version.** Dependency `commander` is not pinned to an exact version (`^12.1.0`). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | `skills/lxgicstudios/name-gen/package.json` |
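The redaction mitigation in the first finding can be sketched as a pre-send pass over the diff. This is illustrative only: `redactDiff` and its pattern list are hypothetical, not part of stash-namer, and regex screening is inherently incomplete — it reduces, rather than eliminates, the exposure.

```typescript
// Hypothetical pre-send redaction pass: mask common secret shapes in a
// diff before it reaches the LLM. The pattern list is illustrative, not
// a comprehensive secret scanner.
const SECRET_PATTERNS: [RegExp, string][] = [
  [/sk-[A-Za-z0-9]{20,}/g, "[REDACTED_OPENAI_KEY]"], // OpenAI-style keys
  [/AKIA[0-9A-Z]{16}/g, "[REDACTED_AWS_KEY]"],       // AWS access key IDs
  [
    /-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----/g,
    "[REDACTED_PRIVATE_KEY]",                        // PEM private key blocks
  ],
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[REDACTED_EMAIL]"], // email addresses
];

function redactDiff(diff: string): string {
  // Apply each pattern in turn; order matters only if patterns overlap.
  return SECRET_PATTERNS.reduce(
    (text, [pattern, replacement]) => text.replace(pattern, replacement),
    diff,
  );
}
```

A caller would run the diff through `redactDiff` immediately before building the API request, so the raw secret never leaves the machine.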
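The output-validation mitigation in the second finding can be sketched as a post-generation check. The function names, length limit, and keyword list below are assumptions for illustration, not stash-namer's actual code; a keyword screen is a crude heuristic, not a complete injection defense.

```typescript
// Hypothetical post-generation check: accept the model's suggestion only
// if it looks like a plain, short stash message; otherwise fall back to
// a safe default. Limits and keywords are illustrative assumptions.
const NAME_PATTERN = /^[A-Za-z0-9][A-Za-z0-9 _\-./:]{0,71}$/;

function isSafeStashName(name: string): boolean {
  const trimmed = name.trim();
  return (
    NAME_PATTERN.test(trimmed) &&
    // Crude screen for injection-flavored output; not exhaustive.
    !/ignore|system prompt|curl |http/i.test(trimmed)
  );
}

function chooseStashName(llmSuggestion: string, fallback = "wip"): string {
  return isSafeStashName(llmSuggestion) ? llmSuggestion.trim() : fallback;
}
```

The fallback guarantees the tool still produces a usable stash name even when the model's output is rejected.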
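For the third finding, npm can pin `commander` (or any dependency) to an exact version instead of a `^` range. Assuming npm 7 or later for the `--location=project` flag, something like:

```shell
# Reinstall the dependency with an exact version (writes "12.1.0",
# not "^12.1.0", into package.json).
npm install --save-exact commander@12.1.0

# Optionally make exact pinning the project default for future installs
# (writes save-exact=true to the project's .npmrc).
npm config set save-exact true --location=project
```

Exact pins plus a committed lockfile keep installs reproducible and narrow the window for a compromised patch release to slip in.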
[View the full report on SkillShield](https://skillshield.io/report/b8cad7d3bc8c38f1)
Powered by SkillShield