Trust Assessment
metro-bundler received a trust score of 88/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding (0 critical, 1 high, 0 medium, 0 low severity). The key finding is Potential Command Injection via Shell Command Examples.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via Shell Command Examples | LLM | SKILL.md:245 |

The skill provides numerous shell command examples (`npx`, `npm`, `rm`, `lsof`, `kill`, `curl`, `source-map-explorer`). If the host LLM is designed to execute these commands and allows user-controlled input to modify command arguments (e.g., port numbers, file paths, URLs, package names), it could lead to arbitrary command execution, file deletion, or information disclosure. For instance, if a user can specify the port for `lsof -ti:<port> | xargs kill -9`, they might inject malicious commands. Similarly, `rm -rf $TMPDIR/react-*` could be manipulated if `$TMPDIR` is user-controlled, and the `curl` commands could be exploited for server-side request forgery (SSRF) or arbitrary file writes if the URL or output file path is user-controlled.

Recommended mitigations:
1. Sanitize and validate input: strictly validate and sanitize all user-provided arguments for shell commands to prevent injection of malicious characters or commands.
2. Least privilege: execute commands with the minimum necessary permissions.
3. Avoid direct execution: where possible, use safer APIs or libraries that abstract shell execution and provide better input validation.
4. Sandboxing: execute commands in a highly restricted sandboxed environment.
5. Warn users explicitly: if the skill is intended for direct execution, add warnings about modifying commands with untrusted input.
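As a minimal sketch of mitigations 1 and 3, assuming the host tool drives these commands from Python (the function names `validate_port` and `kill_process_on_port` are illustrative, not part of the skill): a strict allow-list check on the port argument, combined with passing arguments as a list instead of a shell string, defeats injection attempts like `"8081; rm -rf /"`:

```python
import re
import subprocess

def validate_port(port: str) -> int:
    """Allow-list validation: accept only a bare decimal TCP port."""
    if not re.fullmatch(r"\d{1,5}", port):
        raise ValueError(f"invalid port: {port!r}")
    n = int(port)
    if not 1 <= n <= 65535:
        raise ValueError(f"port out of range: {n}")
    return n

def kill_process_on_port(port: str) -> None:
    """Kill processes listening on a TCP port, without invoking a shell."""
    n = validate_port(port)
    # Arguments are passed as a list (no shell=True), so the value is
    # never interpreted by a shell and cannot carry injected commands.
    result = subprocess.run(
        ["lsof", "-ti", f"tcp:{n}"], capture_output=True, text=True
    )
    for pid in result.stdout.split():
        subprocess.run(["kill", "-9", pid])
```

Here `validate_port("8081; rm -rf /")` raises `ValueError` instead of reaching the shell, whereas the original pipeline `lsof -ti:<port> | xargs kill -9` would hand the raw string to a shell.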