Trust Assessment
xfetch received a trust score of 76/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 2 medium, and 0 low severity. Key findings: an unpinned dependency in the install instructions, sensitive credentials exposed on the command line, and potential command injection via unsanitized user input.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned dependency in install instructions.** The `npm` installation command for `xfetch-cli` in the skill's manifest does not specify a version, so `npm install -g xfetch-cli` always fetches the latest release. A compromised maintainer account or a maliciously published version could therefore introduce breaking changes or malicious code. Pinning dependencies is crucial for supply-chain security and reproducibility: pin to a specific version (e.g., `"package": "xfetch-cli@1.2.3"`) or a version range (e.g., `"package": "xfetch-cli@^1.0.0"`) in the manifest's `install` section. | LLM | SKILL.md |
| MEDIUM | **Sensitive credentials exposed on command line.** The skill's documentation instructs users to pass sensitive `auth_token` and `ct0` values directly as command-line arguments (e.g., `xfetch auth set --auth-token <token> --ct0 <token>`). Credentials passed this way can appear in process lists (`ps aux`), shell history, and system logs, exposing them to other users or processes on the same system. Prefer environment variables, secure configuration files, or interactive prompts; if command-line arguments are unavoidable, clear shell history afterward, be aware of process-list exposure, and consider a secret store such as `pass` or `keyring`. | LLM | SKILL.md:17 |
| MEDIUM | **Potential command injection via unsanitized user input.** The skill demonstrates calling `xfetch` with user-provided arguments such as `@handle`, `<url-or-id>`, and quoted `"query"` strings. If the AI agent builds these commands by concatenating untrusted input without escaping shell metacharacters, a malicious user could inject arbitrary shell commands; the risk is amplified if `xfetch` itself does not sanitize arguments before internal processing or before passing them to sub-processes. Agents calling this skill must escape all user-provided input (e.g., with a shell-quoting library function) before constructing commands, and the `xfetch` developers should add robust input validation to prevent injection within the tool itself. | LLM | SKILL.md:31 |
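The command-injection finding can be mitigated on the calling side by never handing user input to a shell string. A minimal Python sketch, assuming the agent invokes `xfetch` itself (the `run_xfetch` helper is illustrative, not part of the xfetch CLI):

```python
import shlex

def run_xfetch(subcommand: str, *user_args: str) -> str:
    """Build an xfetch invocation with user input kept in argument position.

    Passing arguments as a list (not an interpolated string) means no shell
    ever parses them, so metacharacters like ';' or '$(...)' in user input
    cannot inject extra commands. shlex.quote is used here only to render
    a safely-quoted printable form for logging.
    """
    cmd = ["xfetch", subcommand, *user_args]
    # In real use: subprocess.run(cmd, check=True)  -- list form, no shell=True
    return " ".join(shlex.quote(part) for part in cmd)

# A hostile "handle" stays inert inside its single argument:
print(run_xfetch("user", "@alice; rm -rf ~"))
# → xfetch user '@alice; rm -rf ~'
```

The key design choice is the list form: `subprocess.run(cmd)` with a list bypasses the shell entirely, so quoting is only needed when a command must be rendered as text.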
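For the credential-exposure finding, one way to follow the recommendation is to keep tokens out of `argv` entirely. A hypothetical Python sketch, assuming an `XFETCH_AUTH_TOKEN` environment variable and a credential-file location of our own choosing (xfetch's documentation only describes the `--auth-token`/`--ct0` flags):

```python
import os
import tempfile
from pathlib import Path

def store_token(token: str, path: Path) -> None:
    """Persist a credential to a user-only-readable file instead of passing
    it on the command line, where `ps aux` and shell history would expose it."""
    path.touch(mode=0o600, exist_ok=True)
    path.chmod(0o600)  # enforce owner read/write only, regardless of umask
    path.write_text(token)

# Read the token from the environment (never a CLI flag) and persist it.
# The env var name and file location are illustrative assumptions.
cred_file = Path(tempfile.gettempdir()) / "xfetch_auth_token"
store_token(os.environ.get("XFETCH_AUTH_TOKEN", "example-token"), cred_file)
print(oct(cred_file.stat().st_mode & 0o777))
# → 0o600
```

In practice the file would live under the user's config directory rather than the temp directory; the point is the `0o600` mode and the environment-variable source.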
Full report: [skillshield.io/report/1ded7f21900ca916](https://skillshield.io/report/1ded7f21900ca916)
Powered by SkillShield