Trust Assessment
alias-gen received a trust score of 28/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 2 critical, 2 high, 2 medium, and 0 low severity. Key findings include Persistence / self-modification instructions, Unpinned npm dependency version, Shell History Exfiltration to Third-Party LLM.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings6
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Persistence / self-modification instructions Shell RC file modification for persistence Remove any persistence mechanisms. Skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | skills/lxgicstudios/alias-gen/src/index.ts:15 | |
| CRITICAL | Shell History Exfiltration to Third-Party LLM The skill reads the user's shell command history (e.g., from `process.env.HISTFILE` or `~/.zsh_history`) and sends the last 500 lines directly to the OpenAI API. Shell history can contain highly sensitive information, including private file paths, internal commands, and potentially credentials or API keys used in commands. Sending this data to an external LLM service constitutes a severe data exfiltration risk, as the user's private command history is transmitted to a third party without explicit, granular consent or redaction. Implement a robust sanitization and redaction process for shell history before sending it to an LLM. This should include redacting known sensitive patterns (e.g., API keys, passwords, private file paths). Provide clear user consent mechanisms, allowing users to review and approve the history content, or offer options to exclude sensitive commands/patterns. Consider local processing if possible, or using a privacy-preserving LLM. At minimum, explicitly warn the user about this data transmission and obtain informed consent. | LLM | src/index.ts:9 | |
| HIGH | Potential Credential Exposure from Shell History As a specific instance of data exfiltration, the skill sends the user's shell history to OpenAI. This history is highly likely to contain commands that include sensitive credentials such as API keys (e.g., `export OPENAI_API_KEY=sk-...`), database passwords (`mysql -u user -psecret`), SSH commands with private key paths (`ssh -i ~/.ssh/id_rsa`), or other secrets. Exposing these credentials to a third-party LLM service poses a significant security risk, as they could be logged or misused by the LLM provider or an attacker if the LLM service is compromised. Implement strict redaction of known credential patterns (e.g., API keys, common password flags, private key paths) from the shell history before sending it to the LLM. This requires careful pattern matching and potentially user-configurable redaction rules. Educate users about the risks and provide clear options to prevent sensitive data from being sent. | LLM | src/index.ts:9 | |
| HIGH | Untrusted Shell History Used in LLM Prompt The skill constructs an LLM prompt by directly embedding the user's shell history (`userContent = `Analyze this shell history and suggest aliases:\n\n${history}``). If a malicious actor can inject specific commands or instructions into the user's shell history (e.g., by compromising a system or tricking a user into running a crafted command), these injected strings could be interpreted by the LLM as instructions, leading to prompt injection attacks. This could manipulate the LLM's behavior, cause it to reveal parts of its system prompt, generate malicious aliases, or output sensitive information from the history that it was not intended to reveal. Implement robust input sanitization or a prompt templating strategy that clearly separates user-provided data from system instructions. Explicitly instruct the LLM to treat the `history` content as raw data for analysis only, and to ignore any embedded instructions or commands within it. Consider using LLM features designed for structured input or function calling to reduce the risk of prompt injection. | LLM | src/index.ts:12 | |
| MEDIUM | Unpinned npm dependency version Dependency 'commander' is not pinned to an exact version ('^12.1.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/lxgicstudios/alias-gen/package.json | |
| MEDIUM | Unpinned Dependencies in package.json The `package.json` file uses caret (`^`) ranges for its dependencies (`commander`, `openai`, `ora`, `typescript`, `@types/node`). This allows `npm install` (or `npx` in some contexts) to automatically update to newer minor or patch versions. While `package-lock.json` pins exact versions, relying on `npx` without a guaranteed lockfile adherence introduces a supply chain risk. A malicious update to any of these dependencies could be pulled in without explicit review, potentially leading to the execution of compromised code. Pin all dependencies to exact versions (e.g., `12.1.0` instead of `^12.1.0`) to ensure deterministic builds and prevent unexpected updates. Regularly audit and manually update dependencies to incorporate security fixes. For `npx` usage, consider bundling the application or using a tool that guarantees lockfile adherence to ensure the exact versions specified in `package-lock.json` are always used. | LLM | package.json:9 |
Scan History
Embed Code
[](https://skillshield.io/report/ed4477b30bb5ec62)
Powered by SkillShield