Trust Assessment
systematic-debugging received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. The key findings are a command injection via a user-controlled argument to `ls` (high) and a potential command injection via `npm test` with user-controlled arguments (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 11, 2026 (commit 6d52fe32). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Command injection via user-controlled argument to `ls`.** The `find-polluter.sh` script uses its first command-line argument (`$1`) directly in an `ls -la` command without sanitization or escaping. If an untrusted user supplies a string containing shell metacharacters (e.g., `'; rm -rf /'`), those commands are executed in the shell context, allowing arbitrary command injection. *Remediation:* validate or escape `$1` before using it in shell commands; ensure the argument is a valid path containing no shell metacharacters, and consider `printf %q` for robust quoting if the argument must be passed to another shell. | LLM | find-polluter.sh:45 |
| MEDIUM | **Potential command injection via `npm test` with user-controlled arguments.** The script executes `npm test` with a file path (`$TEST_FILE`) derived from user-provided input (`$2`, `TEST_PATTERN`). Although `$TEST_FILE` is quoted, if the configured test runner is vulnerable to argument injection (e.g., it interprets strings in the path as options such as `--config` or `--require`), an attacker could craft a file name or pattern that executes arbitrary code. The risk depends on the specific `npm test` configuration and test runner. *Remediation:* configure `npm test` and its runner to prevent argument injection from file paths; where possible, pass paths via path-specific options (e.g., `--testPathPattern`) rather than positional arguments, and strictly validate the `TEST_PATTERN` argument. | LLM | find-polluter.sh:35 |
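The report does not include the script itself, so the following is a minimal hardening sketch for the high-severity finding, assuming `$1` is meant to name a directory that is then listed with `ls -la`. The function name `safe_ls` and the deliberately conservative character allowlist are illustrative choices, not part of `find-polluter.sh`:

```shell
# Hypothetical hardening sketch for the ls call in find-polluter.sh.
# Validates the user-supplied path before it ever reaches a shell command.
safe_ls() {
  dir="$1"

  # Reject empty input and anything that is not an existing directory.
  if [ -z "$dir" ] || [ ! -d "$dir" ]; then
    echo "error: not a directory: $dir" >&2
    return 1
  fi

  # Conservative allowlist: a path here should only need these characters.
  # Anything else (spaces, ';', '$', backticks, ...) is rejected outright.
  case "$dir" in
    *[!A-Za-z0-9._/-]*)
      echo "error: invalid characters in path" >&2
      return 1 ;;
  esac

  # Always quote the expansion so the shell never re-splits or
  # re-interprets the value as metacharacters.
  ls -la "$dir"
}
```

Quoting `"$dir"` alone already prevents the shell from interpreting metacharacters in the argument; the allowlist adds defense in depth in case the value is later interpolated into another command.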
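For the medium-severity finding, a sketch of the validation the remediation describes, assuming `$TEST_FILE` is ultimately passed to `npm test`. The helper name `validate_test_path` is hypothetical; the `--` separator is npm's standard way to stop its own option parsing, and the leading-dash check keeps the path from being read as a runner flag such as `--config`:

```shell
# Hypothetical validation sketch for the npm test call in find-polluter.sh.
# Returns 0 only for paths that cannot be mistaken for options.
validate_test_path() {
  test_file="$1"

  # Reject option-like values so the test runner cannot treat the
  # path as a flag (e.g. --config=evil.js or --require).
  case "$test_file" in
    -*)
      echo "error: test path may not begin with '-'" >&2
      return 1 ;;
  esac

  # Allow only a conservative character set for test file paths.
  case "$test_file" in
    *[!A-Za-z0-9._/-]*)
      echo "error: invalid characters in test path" >&2
      return 1 ;;
  esac
}

# Usage in the script might then look like:
#   validate_test_path "$TEST_FILE" && npm test -- "$TEST_FILE"
```

Note that `--` only stops npm's option parsing; the underlying test runner still sees the argument, which is why the leading-dash check is needed as well.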
Full report: https://skillshield.io/report/6f456df2e334a69d