Trust Assessment
web-search received a trust score of 65/100, placing it in the Caution category. This skill carries security findings that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, and 1 medium severity (no low-severity findings). Key findings: Command Injection via User Query, Arbitrary File Write via User-Controlled Output Path, and Unpinned Dependency in Installation Instructions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via User Query.** The skill instructs the LLM to construct shell commands using user-provided queries directly, e.g., `python scripts/search.py "<query>"`. If the user's input for `<query>` is not properly sanitized or shell-escaped before being embedded into the command string, a malicious user could inject arbitrary shell commands. For example, a query like `"; rm -rf /"` could lead to the execution of `rm -rf /` on the host system. The LLM should implement robust input sanitization and shell escaping for all user-provided arguments (especially the `<query>`) before constructing and executing shell commands. Consider using a library function that safely escapes arguments for the target shell, or pass arguments as a list to `subprocess.run` to avoid shell interpretation. | LLM | SKILL.md:50 |
| HIGH | **Arbitrary File Write via User-Controlled Output Path.** The skill describes an `--output <file-path>` option, allowing search results to be saved to a user-specified file. If the `<file-path>` is directly derived from user input without validation or sanitization, a malicious user could specify arbitrary file paths. This could lead to overwriting critical system files, writing to sensitive directories, or exfiltrating data by writing to publicly accessible locations. The LLM should validate and sanitize user-provided file paths for the `--output` option. Restrict output to a designated, sandboxed directory. Prevent directory traversal (`../`) and absolute paths. Consider using a temporary file mechanism or requiring explicit user confirmation for file writes outside a safe zone. | LLM | SKILL.md:190 |
| MEDIUM | **Unpinned Dependency in Installation Instructions.** The skill instructs users to install `duckduckgo-search` using `pip install duckduckgo-search`. This dependency is not pinned to a specific version. This introduces a supply chain risk, as future versions of the package could introduce breaking changes, vulnerabilities, or even malicious code without explicit review. Pin the dependency to a specific, known-good version (e.g., `pip install duckduckgo-search==X.Y.Z`). Regularly review and update pinned dependencies to incorporate security fixes. | LLM | SKILL.md:40 |
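For the critical finding, the remediation can be sketched as follows. This is a minimal illustration, assuming the skill invokes `scripts/search.py` as shown in SKILL.md; the function names here are illustrative, not part of the skill itself.

```python
import shlex
import subprocess

def run_search(query: str) -> str:
    """Run the search script with the query passed as a discrete argument.

    With shell=False (the default) and a list of arguments, shell
    metacharacters in `query` such as '"; rm -rf /"' are treated as
    literal text, never interpreted by a shell.
    """
    result = subprocess.run(
        ["python", "scripts/search.py", query],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def build_command(query: str) -> str:
    """If a shell command string is unavoidable, escape the argument."""
    return f"python scripts/search.py {shlex.quote(query)}"
```

Passing the argument list directly (`run_search`) is preferable; `shlex.quote` is the fallback when a single command string must be produced.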
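For the high-severity finding, the path-restriction advice can be sketched like this. The `search-results` sandbox directory and the function name are assumptions for illustration, not part of the skill.

```python
from pathlib import Path

# Hypothetical designated output directory for the skill's results.
SAFE_DIR = Path("./search-results").resolve()

def safe_output_path(user_path: str) -> Path:
    """Resolve a user-supplied --output path and reject escapes.

    After resolution, `..` segments and absolute paths either fold back
    inside SAFE_DIR or land outside it; anything outside is refused.
    """
    candidate = (SAFE_DIR / user_path).resolve()
    if not candidate.is_relative_to(SAFE_DIR):
        raise ValueError(f"output path escapes sandbox: {user_path!r}")
    return candidate
```

Note that joining an absolute `user_path` onto `SAFE_DIR` with `pathlib` replaces the base entirely, so the `is_relative_to` check catches absolute paths as well as `../` traversal (requires Python 3.9+).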
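For the medium-severity finding, beyond pinning at install time, one possible complement is a runtime guard that fails fast when the installed version drifts from the reviewed one. This is a sketch, not part of the skill; the `6.1.0` version string is a placeholder, not a verified known-good release.

```python
from importlib import metadata

# Placeholder: substitute the actual reviewed, known-good version.
EXPECTED_VERSION = "6.1.0"

def check_pinned(package: str = "duckduckgo-search",
                 expected: str = EXPECTED_VERSION) -> None:
    """Raise if the installed package differs from the reviewed version.

    Catches silent upgrades that would bypass the pin in the install
    instructions (e.g., a pre-existing environment).
    """
    installed = metadata.version(package)  # PackageNotFoundError if absent
    if installed != expected:
        raise RuntimeError(
            f"{package} is {installed}, expected {expected}; "
            "re-review before running the skill"
        )
```

Pinning in the install command (or a `requirements.txt`) remains the primary control; this check only surfaces mismatches at run time.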
Full report: https://skillshield.io/report/7a192eb37736cc30