Trust Assessment
ddg-search received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified one finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is untrusted external API output used in an LLM response.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Untrusted external API output used in LLM response. The skill fetches text content from the DuckDuckGo API (`api.duckduckgo.com`) and outputs it directly with `echo -e`. If the host LLM processes this output as conversational context or as further instructions, a malicious or compromised API response could inject prompts or manipulate the LLM's behavior. `echo -e` also interprets backslash escapes, which could enable terminal manipulation or more complex prompt injection if the LLM is sensitive to such sequences. Recommendation: sanitize or filter all output received from external APIs before presenting it to the LLM or user; remove or escape any characters that could be interpreted as instructions or special formatting; use `printf '%s' "$output"` instead of `echo -e` to prevent interpretation of backslash escapes; and implement a robust sanitization layer for the content itself. | LLM | `scripts/search.sh:67` |
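The recommended mitigation can be sketched as a small shell helper. This is an illustrative example, not code from the skill itself: the `sanitize` function name and the simulated response string are assumptions, and a real fix in `scripts/search.sh` would apply the same pattern to the actual API output variable.

```shell
#!/bin/sh
# Hypothetical sanitization helper illustrating the report's recommendation.
sanitize() {
  # Emit the string with printf '%s', which (unlike `echo -e`) never
  # interprets backslash escape sequences, then strip raw control bytes
  # (except tab/newline/CR) that could drive terminal escape sequences.
  printf '%s' "$1" | tr -d '\000-\010\013\014\016-\037'
}

# Simulated untrusted API response containing an attempted ANSI escape.
response='Search result \e[31mwith attempted escape'

# With `echo -e "$response"` the \e[31m would be expanded into a live
# terminal color code; here it is printed as literal, inert text.
sanitize "$response"
printf '\n'
```

Note that stripping control bytes only hardens the terminal path; prompt-injection text is still plain printable characters, so the report's separate call for a content-level sanitization layer still applies.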
Scan History
Powered by SkillShield