Trust Assessment
lastfm received a trust score of 34/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 1 critical, 3 high, 0 medium, and 0 low severity. Key findings include "File read + network send exfiltration", "Sensitive path access: AI agent config", and "Potential command injection via unsanitized user input in curl parameters".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **File read + network send exfiltration.** AI agent config/credential file access. Recommendation: remove access to sensitive files not required by the skill's stated purpose. SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | skills/poiley/whatisxlistening-to/skills/lastfm/SKILL.md:8 |
| HIGH | **Sensitive path access: AI agent config.** Access to an AI agent config path was detected: `~/.clawdbot/`. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/poiley/whatisxlistening-to/skills/lastfm/SKILL.md:8 |
| HIGH | **Potential command injection via unsanitized user input in curl parameters.** The skill's documentation provides `curl` examples that include parameters like `artist`, `track`, `album`, and `tag`. If the implementation constructs these `curl` commands by directly interpolating unsanitized user input and executes them via a shell (e.g., `os.system` or `subprocess.run` in Python), it is vulnerable to command injection: a malicious user could inject arbitrary shell commands (e.g., `artist=foo%26%26rm%20-rf%20/`). Recommendation: strictly validate and sanitize any user-provided input used to build shell commands. Prefer HTTP client libraries that handle URL parameter encoding automatically (e.g., Python's `requests`) over direct shell execution. If shell execution is unavoidable, use `subprocess.run` with `shell=False` and pass arguments as a list, or escape all user input with `shlex.quote`. | LLM | SKILL.md:60 |
| HIGH | **Potential command injection via unsanitized user input in jq filters.** The skill's documentation provides `jq` examples for processing JSON output. If the implementation lets user input define the `jq` filter string and executes `jq` via a shell command, it is vulnerable to command injection: a malicious user could embed arbitrary shell commands in the filter string. Recommendation: avoid constructing `jq` filter strings from unsanitized user input. If dynamic filtering is required, implement it programmatically in the skill's language (e.g., Python's `json` module) rather than via external shell tools with user-controlled arguments. If `jq` must be used, strictly validate, sanitize, or whitelist allowed filter patterns, and use `subprocess.run` with `shell=False` and `shlex.quote` for any user-provided parts. | LLM | SKILL.md:295 |
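The two LLM findings above share one remediation pattern: keep user input out of any shell-parsed string. A minimal Python sketch of that pattern follows; the API URL, parameter names, and JSON shape are illustrative assumptions, not taken from the skill's actual implementation.

```python
import json
import urllib.parse

# Assumed endpoint for illustration only; not read from the skill.
API_URL = "https://ws.audioscrobbler.com/2.0/"

def build_safe_url(artist: str, track: str) -> str:
    # urlencode percent-encodes shell metacharacters, so an input like
    # "foo&&rm -rf /" becomes inert data, never a command separator.
    params = {"method": "track.getInfo", "artist": artist, "track": track}
    return API_URL + "?" + urllib.parse.urlencode(params)

def curl_argv(url: str) -> list[str]:
    # If shelling out to curl is truly unavoidable, build an argument list
    # and run it with subprocess.run(curl_argv(url), shell=False) so no
    # shell ever parses the URL.
    return ["curl", "-s", url]

def top_artist_names(payload: str, limit: int = 3) -> list[str]:
    # Filter JSON in-process with the json module instead of handing a
    # user-supplied jq filter string to a shell.
    data = json.loads(payload)
    artists = data.get("topartists", {}).get("artist", [])
    return [a["name"] for a in artists[:limit]]
```

The list-based `curl_argv` plus `shell=False` and the in-process JSON filtering directly implement the mitigations recommended in both HIGH findings.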
[View the full report on SkillShield](https://skillshield.io/report/ce26a492fef269a9)
Powered by SkillShield