Trust Assessment
xai-plus received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include User input directly embedded in remote LLM prompt, Arbitrary local file read via image attachment.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Arbitrary local file read via image attachment The `chat.mjs` script allows users to specify local file paths via the `--image` flag. The script then reads the content of these files (`fs.readFileSync`) and sends them, base64 encoded, to the xAI API. If a malicious actor can control the `PATH` argument (e.g., through a crafted prompt to the host LLM), they could exfiltrate sensitive local files (e.g., `/etc/passwd`, API keys, SSH keys, or `~/.clawdbot/clawdbot.json`) by instructing the skill to read them and send their contents to the xAI service. Restrict file access to a predefined, secure directory (e.g., a temporary upload directory). Implement strict validation on file paths to prevent directory traversal (e.g., `../`, absolute paths outside allowed scope). If possible, use a file picker UI instead of direct path input to limit the scope of accessible files. | LLM | scripts/chat.mjs:76 | |
| HIGH | User input directly embedded in remote LLM prompt The skill constructs prompts for the xAI Grok LLM by directly embedding user-provided query strings without sufficient sanitization or separation. A malicious user could craft input to perform prompt injection against the xAI LLM, potentially leading to unintended actions, biased outputs, or attempts to extract information from the LLM's context. This compromises the integrity and reliability of the skill's interaction with the remote LLM. Implement robust input sanitization and validation for user-provided content before embedding it into LLM prompts. Consider using a separate, hardened prompt template for user input, or explicitly marking user content as 'user' role within the LLM's conversational context to prevent it from being interpreted as instructions. | LLM | scripts/analyze.mjs:140 |
Scan History
Embed Code
[](https://skillshield.io/report/ef41cfa08c0b764a)
Powered by SkillShield