Trust Assessment
newsapi-search received a trust score of 90/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The sole high-severity finding is Potential Command Injection via User-Controlled Arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via User-Controlled Arguments.** The skill's `SKILL.md` documentation demonstrates command-line execution of Node.js scripts (`scripts/search.js`, `scripts/sources.js`) where user-provided input (e.g., search queries, domains, sources) is passed directly as arguments. If the underlying Node.js scripts do not properly sanitize or escape these arguments before using them in shell commands (e.g., via `child_process.exec` or `spawn` with `shell: true`), an attacker could inject arbitrary shell commands. For example, a malicious query like `"technology; rm -rf /"` could lead to arbitrary code execution. **Remediation:** The Node.js scripts (`scripts/search.js`, `scripts/sources.js`) must ensure that all user-provided arguments are properly sanitized and escaped before being used in any shell command execution. Prefer `child_process.spawn` with an array of arguments over `child_process.exec` to avoid shell interpretation, or ensure robust input validation and escaping if `exec` is necessary. | LLM | SKILL.md:47 |
Scan History
Embed Code
[View the full report](https://skillshield.io/report/725756c4f81dde3a)
Powered by SkillShield