Trust Assessment
songsee received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 1 high and 1 medium severity (0 critical, 0 low). Key findings: potential command injection via the `ffmpeg` dependency, and arbitrary file read/write via CLI arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via `ffmpeg` dependency.** The skill description explicitly states that "other formats use ffmpeg if available", indicating that the `songsee` CLI can invoke `ffmpeg` for audio decoding. If `songsee` passes user-controlled filenames or parameters to `ffmpeg` without proper sanitization, an attacker could craft a malicious filename or input that executes arbitrary shell commands via `ffmpeg`'s command-line arguments. *Recommendation:* thoroughly sanitize and validate any user-provided input (e.g., filenames, paths, format strings) passed to `ffmpeg` or other external tools, and use a robust escaping mechanism or a library that safely handles external command execution. | LLM | SKILL.md:15 |
| MEDIUM | **Arbitrary File Read/Write via CLI Arguments.** The `songsee` CLI examples read from arbitrary input files (e.g., `track.mp3`) and write to arbitrary output files (e.g., `slice.jpg`, `out.png`) specified by the user. Although `songsee` itself is a local tool, an LLM instructed to use it with untrusted input or output paths could be coerced into reading sensitive files or writing malicious content to arbitrary locations, potentially overwriting critical system files or exfiltrating data to a publicly accessible location. *Recommendation:* validate and sandbox all file paths passed to `songsee`, restrict file operations to a designated isolated directory, keep output paths within a secure temporary location rather than letting the LLM choose them arbitrarily, and reject input paths containing directory traversal sequences (e.g., `../`). | LLM | SKILL.md:5 |
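The command-injection mitigation can be sketched as follows. This is a minimal illustration, not `songsee`'s actual implementation; `decode_audio` and `safe_arg` are hypothetical helper names. The key points are passing the argument vector as a list (so no shell ever interprets the filename) and neutralizing filenames that begin with `-`, which `ffmpeg` would otherwise parse as option flags.

```python
import subprocess


def safe_arg(path: str) -> str:
    """Return a form of `path` that ffmpeg cannot mistake for an option flag.

    A relative filename such as "-evil.mp3" would be parsed by ffmpeg as an
    option; prefixing "./" forces it to be treated as a file path.
    """
    return f"./{path}" if path.startswith("-") else path


def decode_audio(input_path: str, output_path: str) -> None:
    """Decode an audio file via ffmpeg without shell interpretation.

    List-form argv means shell metacharacters in filenames (;, |, $(), `)
    are passed to ffmpeg verbatim, never executed by a shell.
    """
    subprocess.run(
        ["ffmpeg", "-y", "-i", safe_arg(input_path), safe_arg(output_path)],
        check=True,  # raise if ffmpeg exits non-zero
    )
```

Note that `shell=False` is already the default for `subprocess.run` with a list argument; the vulnerability class arises only when a command string is built by concatenation and handed to a shell.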
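The path-sandboxing mitigation for the second finding can likewise be sketched. Again this is an illustrative pattern, not part of `songsee`; `confine` is a hypothetical helper. Resolving the candidate path and checking containment against the resolved base directory rejects both `../` traversal and symlink escapes.

```python
from pathlib import Path


def confine(user_path: str, base: Path) -> Path:
    """Resolve `user_path` relative to `base` and refuse paths that escape it.

    Resolution normalizes ".." components and follows symlinks, so the
    containment check cannot be bypassed by traversal sequences.
    """
    base = base.resolve()
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):  # requires Python 3.9+
        raise ValueError(f"path escapes sandbox: {user_path}")
    return candidate
```

With this in place, an output argument like `../../etc/cron.d/job` raises `ValueError` instead of silently writing outside the working directory.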