Trust Assessment
search-x received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. Key findings include User input directly embedded in downstream LLM prompt.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | User input directly embedded in downstream LLM prompt The `scripts/search.js` skill constructs a prompt for the xAI Grok model by directly interpolating user-provided query (`options.query`) into the `input` field of the API request. This allows a malicious user to inject instructions into the prompt, potentially manipulating the xAI model's behavior, causing it to ignore system instructions, generate unintended content, or attempt to extract information from the xAI model's context. While this is not a direct prompt injection against the host LLM orchestrating the skill, it is a significant vulnerability against the downstream LLM used by the skill. Implement robust input sanitization or escaping for user-provided queries before embedding them into LLM prompts. If the xAI API supports a structured query parameter for the `x_search` tool, prefer using that over embedding the query in natural language. Consider adding output filtering to validate responses from the xAI model to ensure they adhere to expected formats and content. | LLM | scripts/search.js:160 |
Scan History
Embed Code
[](https://skillshield.io/report/02f0c5d0a6452f21)
Powered by SkillShield