Trust Assessment
xai-search received a trust score of 36/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 0 critical, 4 high, 1 medium, and 0 low severity. Key findings include "Sensitive path access: AI agent config", "Direct User Input to LLM (Prompt Injection)", and "Unpinned Dependency in Skill Requirements".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Sensitive path access: AI agent config.** Access to AI agent config path detected: `~/.clawdbot/`. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/aydencook03/xai-search/SKILL.md:60 |
| HIGH | **Sensitive path access: AI agent config.** Access to AI agent config path detected: `~/.clawdbot/`. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/aydencook03/xai-search/SKILL.md:63 |
| HIGH | **Sensitive path access: AI agent config.** Access to AI agent config path detected: `~/.clawdbot/`. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/aydencook03/xai-search/SKILL.md:66 |
| HIGH | **Direct User Input to LLM (Prompt Injection).** The `query` argument, taken directly from user command-line input (`sys.argv[2:]`), is passed without sanitization or validation as a user message to the xAI Grok LLM via `chat.append(user(query))`. An attacker can craft malicious input in the query to attempt prompt injection attacks against the Grok LLM, potentially manipulating its responses, extracting unintended information, or causing it to act beyond its intended scope. While this targets the external Grok LLM rather than the host LLM running the skill, it is a direct path for untrusted user input to influence an LLM's behavior within the skill's execution. Remediation: implement input sanitization, validation, or a robust prompt templating system to separate user input from system instructions before passing the `query` to the LLM, and consider warning users that their query is passed through to the LLM unmodified. | LLM | scripts/xai-search.py:40 |
| MEDIUM | **Unpinned Dependency in Skill Requirements.** The `SKILL.md` documentation instructs users to install the `xai-sdk` package without specifying a version (`pip install xai-sdk`), and `scripts/xai-search.py` imports `xai_sdk` without any version constraint. This introduces a supply chain risk: future updates to `xai-sdk` or its transitive dependencies could introduce breaking changes, vulnerabilities, or even malicious code if the package maintainer's account is compromised, and users installing or updating the skill would unknowingly receive the latest, potentially compromised, version. Remediation: pin `xai-sdk` to a specific, known-good version (e.g., `pip install xai-sdk==X.Y.Z`) in `SKILL.md` and, if applicable, use a `requirements.txt` file to ensure deterministic installations. | LLM | SKILL.md:19 |
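The dependency-pinning remediation can be expressed as a requirements file. `X.Y.Z` is the placeholder from the finding and should be replaced with whichever release you have actually audited:

```text
# requirements.txt — pin to an audited release (X.Y.Z is a placeholder)
xai-sdk==X.Y.Z
```

Installing with `pip install -r requirements.txt` then yields deterministic installs; pip's `--require-hashes` mode can further guard against a compromised package index.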
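The prompt-injection remediation above can be sketched as a small validation step in front of the flagged call site. This is an illustrative example, not part of the skill: `MAX_QUERY_LEN` and the rejection rules are assumptions to tune for your own use case.

```python
import re

MAX_QUERY_LEN = 500  # assumed limit, not defined by the skill


def validate_query(query: str) -> str:
    """Normalize and bound untrusted CLI input before it reaches the LLM."""
    query = query.strip()
    if not query:
        raise ValueError("empty query")
    if len(query) > MAX_QUERY_LEN:
        raise ValueError(f"query exceeds {MAX_QUERY_LEN} characters")
    # Replace control characters that could smuggle hidden formatting
    # or instruction-like content into the prompt.
    return re.sub(r"[\x00-\x1f\x7f]", " ", query)
```

At the flagged line this would wrap the input, e.g. `chat.append(user(validate_query(" ".join(sys.argv[2:]))))`. Note that validation alone does not eliminate prompt injection; combining it with a prompt template that clearly labels the query as untrusted data is the stronger fix the finding recommends.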
[View the full report on SkillShield](https://skillshield.io/report/31638515dbb9418f)
Powered by SkillShield