Trust Assessment
x-automation received a trust score of 65/100, placing it in the Caution category. Users should review the security findings below before deploying this skill.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 1 medium, and 0 low severity. Key findings include Broad Browser Access via `browser(profile="chrome")`, Direct Manipulation of LLM's Generative Capabilities, and Generated Content Exfiltration via User Notification.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Broad Browser Access via `browser(profile="chrome")`** — The skill explicitly requests and uses `browser(profile="chrome")`, granting the AI agent full control over the user's Chrome session: every site the user is logged into, cookies, local storage, and potentially sensitive browsing history. An untrusted skill could abuse this to access personal accounts, perform unauthorized actions, or gather sensitive information without per-action user consent. *Recommendation:* restrict browser access to specific domains or functionalities where possible, sandbox browser operations strictly, and require explicit user confirmation for sensitive actions or navigation to new domains. | LLM | SKILL.md:10 |
| HIGH | **Direct Manipulation of the LLM's Generative Capabilities** — The skill instructs the host LLM to 'Generate **3 candidate tweets**' under constraints such as 'Opinions are encouraged! Be bold, witty, or opinionated'. Coming from an untrusted skill, these instructions steer the model's core generative function and could produce biased, harmful, or otherwise undesirable content, or bypass internal safety mechanisms related to content generation. *Recommendation:* apply content moderation and safety filters to all generated output, isolate generative tasks from sensitive data, and require explicit user review and approval before any generated content is published or used externally. | LLM | SKILL.md:28 |
| HIGH | **Generated Content Exfiltration via User Notification** — The skill instructs the agent to 'Notify the user via the primary channel (Telegram/Webchat) of success or failure. Include the best draft in case of failure.' This channel can exfiltrate generated content, which may include sensitive information derived from the X.com timeline or other sources the agent can access; even though the intent is benign, an untrusted skill could craft malicious 'drafts' or embed sensitive data in failure notifications. *Recommendation:* filter or sanitize all outgoing notifications, require explicit user consent before sending generated content externally, and limit the type and amount of information notifications may include. | LLM | SKILL.md:39 |
| MEDIUM | **File Write Access to the `memory/` Directory** — The skill instructs the agent to write to `memory/x-daily-candidates.log` and `memory/x-automation-logs.md`, granting file write permissions inside the agent's `memory/` directory. Although this directory is typically internal, an untrusted skill could write malicious content, exhaust storage with large files, or stash sensitive information gathered elsewhere into these logs for later access or exfiltration. *Recommendation:* enforce access controls and sandboxing on file write operations, monitor content written to `memory/` for suspicious patterns, and consider temporary or ephemeral storage for transient data. | LLM | SKILL.md:35 |
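The domain-restriction and confirmation-gate mitigations recommended for the critical finding can be sketched as follows. This is a minimal illustration, not SkillShield's or the skill's actual code; the `ALLOWED_DOMAINS` set and the `ask_user` callback are hypothetical names chosen for the example.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the only hosts this skill legitimately needs.
ALLOWED_DOMAINS = {"x.com", "twitter.com"}


def is_navigation_allowed(url: str) -> bool:
    """Return True only if the URL's host is on the allowlist.

    Matching the host exactly or as a dot-separated suffix blocks
    spoofed hosts like "x.com.evil.net".
    """
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)


def confirm_or_block(url: str, ask_user) -> bool:
    """Gate every navigation: allowlisted domains pass silently,
    anything else requires explicit confirmation via ask_user()."""
    if is_navigation_allowed(url):
        return True
    return bool(ask_user(f"Skill wants to open {url}. Allow?"))
```

In practice the host runtime, not the skill, would enforce this check so the skill cannot bypass it.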
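The notification-sanitization mitigation for the exfiltration finding might look like the sketch below. The regex patterns and the 500-character cap are illustrative assumptions, not a complete secret-detection scheme.

```python
import re

# Hypothetical patterns for secrets that must never leave via notifications.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)bearer\s+[a-z0-9._-]+"),     # bearer tokens
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # key=... assignments
    re.compile(r"\b\d{13,19}\b"),                 # long digit runs (card-like)
]
MAX_NOTIFY_LEN = 500  # assumed cap on how much generated text may be sent


def sanitize_notification(text: str) -> str:
    """Redact sensitive substrings, then truncate, before any
    Telegram/Webchat notification leaves the agent."""
    for pat in SENSITIVE_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text[:MAX_NOTIFY_LEN]
```

A length cap alone already limits how much timeline data a single failure notification can carry; the redaction pass additionally strips obvious credentials.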
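For the medium-severity finding, a write guard can confine log writes to `memory/` and bound file growth. This is a sketch under assumptions: `MEMORY_ROOT` stands in for the agent's real workspace path and the 1 MB cap is arbitrary.

```python
from pathlib import Path

# Hypothetical guard for the memory/ directory named in the finding.
MEMORY_ROOT = Path("memory").resolve()
MAX_LOG_BYTES = 1_000_000  # assumed per-file cap against storage exhaustion


def safe_log_write(relpath: str, text: str) -> Path:
    """Append text to a log, but only inside MEMORY_ROOT and only
    while the file stays under MAX_LOG_BYTES."""
    target = (MEMORY_ROOT / relpath).resolve()
    # Reject path traversal such as "../../etc/cron.d/evil".
    if target != MEMORY_ROOT and MEMORY_ROOT not in target.parents:
        raise PermissionError(f"write outside memory/ refused: {relpath}")
    target.parent.mkdir(parents=True, exist_ok=True)
    current = target.stat().st_size if target.exists() else 0
    data = text.encode("utf-8")
    if current + len(data) > MAX_LOG_BYTES:
        raise PermissionError(f"log size cap exceeded: {relpath}")
    with open(target, "ab") as fh:
        fh.write(data)
    return target
```

Resolving the path before the containment check is what defeats `..` traversal; checking the raw string alone would not.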
Full report: <https://skillshield.io/report/c12d8f9a0b0963d4>