Trust Assessment
stash-namer received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 2 medium, and 0 low severity. Key findings include Unpinned npm dependency version, Direct Prompt Injection via Git Diff, Sensitive Data Exfiltration via OpenAI API.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Direct Prompt Injection via Git Diff The `generateStashName` function sends the raw `git diff` output, which includes user-controlled code changes, directly as the `user` message to the OpenAI API. A malicious actor could embed prompt injection instructions within their code changes or commit messages, causing the LLM to ignore its system prompt and execute arbitrary instructions, potentially leading to information disclosure or unintended actions. Implement input sanitization or a robust input/output guardrail mechanism. Consider using a separate LLM call to summarize or sanitize the diff before passing it to the naming LLM. Alternatively, use a technique like 'sandwiching' (placing user input between system instructions and a final instruction) or XML/JSON tagging to better delineate user input from instructions. | LLM | src/index.ts:20 | |
| HIGH | Sensitive Data Exfiltration via OpenAI API The skill captures the full `git diff` (staged and unstaged changes) and transmits it to the OpenAI API for processing. This `diff` can contain sensitive information such as proprietary code, API keys, credentials, PII, or other confidential data present in the user's local repository changes. This data is then processed by a third-party LLM service, posing a significant data exfiltration risk. Implement strict data filtering or redaction on the `git diff` content before sending it to the LLM. Users should be explicitly warned about the data transmission and its implications. Consider local processing or using a privacy-preserving LLM if sensitive data cannot be avoided. Limit the amount of diff sent (e.g., only file names, or a very short summary). | LLM | src/index.ts:11 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/stash-namer/package.json | |
| MEDIUM | Git Option Injection via LLM-Generated Stash Name The LLM-generated stash name is directly used as an argument to `git stash push -m <name>`. If the LLM, due to prompt injection or adversarial input, generates a string containing valid `git stash push` options (e.g., `--include-untracked`, `--all`, `--patch`), these options could be interpreted by `git` as legitimate commands, leading to unintended modifications of the stash operation (e.g., stashing untracked files when not intended, or interactive patching). While `simple-git` typically handles arguments safely, the `git` command itself might parse options within the `-m` message if not properly escaped or quoted by `simple-git` for all edge cases. Sanitize the LLM-generated `name` to remove any leading hyphens or known git options before passing it to `git stash`. Alternatively, ensure `simple-git` explicitly quotes the message argument to prevent `git` from interpreting parts of it as options. A safer approach might be to use a fixed prefix/suffix or validate the output against a regex that disallows git options. | LLM | src/index.ts:32 |
Scan History
Embed Code
[](https://skillshield.io/report/4727d874c65bcec8)
Powered by SkillShield