Trust Assessment
twitter-bookmark-sync received a trust score of 44/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 7 findings: 2 critical, 0 high, 3 medium, and 0 low severity. Key findings include Sensitive environment variable access: $HOME, Untrusted tweet content used in LLM-processed notifications, User-controlled config values embedded in LLM instructions for cron jobs.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings7
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Untrusted tweet content used in LLM-processed notifications The skill processes Twitter bookmark content (`tweet['text']`) which is user-generated and untrusted. This content is directly written to markdown files (`twitter-reading-YYYY-MM-DD.md` by `scripts/rank.py`) and included in notification messages (`MESSAGE` variable in `scripts/notify.sh`). If the host LLM (Clawdbot) processes these markdown files or notification messages (e.g., for summarization, sending via other tools, or displaying), a malicious tweet containing prompt injection instructions could manipulate the LLM. For example, the `notify.sh` script explicitly includes the `READING_LIST` content in an email body for the `gmail` channel, which would likely be processed by Clawdbot. Sanitize or filter untrusted `tweet['text']` before including it in any output that might be processed by an LLM. Specifically, remove or escape any characters or patterns that could be interpreted as instructions or markdown formatting by an LLM. Consider using a dedicated LLM-safe output format or a content filter. | LLM | scripts/notify.sh:60 | |
| CRITICAL | User-controlled config values embedded in LLM instructions for cron jobs The `install.sh` script constructs a natural language instruction for the host LLM (Clawdbot) to set up cron jobs. This instruction includes `$FETCH_TIME` and `$NOTIFY_TIME` which are read directly from `twitter-bookmark-sync-config.json`. If a malicious user modifies these time values in the config file to include shell commands (e.g., `00:00; rm -rf /`), and Clawdbot directly interprets the cron job instruction as a shell command to schedule, it could lead to arbitrary command execution. When constructing instructions for an LLM that include user-controlled variables, strictly validate and sanitize those variables to ensure they only contain expected values (e.g., time formats) and no unexpected characters or commands. Alternatively, use a more structured API for cron job creation if available, rather than natural language instructions. | LLM | install.sh:109 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/tunaissacoding/twitter-bookmark-sync/install.sh:8 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/tunaissacoding/twitter-bookmark-sync/scripts/notify.sh:7 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/tunaissacoding/twitter-bookmark-sync/scripts/sync.sh:7 | |
| INFO | Sensitive API credentials stored in plaintext config file The skill requires users to manually extract `auth_token` and `ct0` (Twitter authentication cookies) and store them in plaintext within `~/.config/bird/config.json5`. While the skill itself does not exfiltrate these credentials, their storage in a predictable plaintext file makes them a potential target for other malicious skills or processes on the system. This is a common pattern for CLI tools but represents a security consideration for sensitive data. Consider using a more secure method for storing credentials, such as OS-level secret management (e.g., macOS Keychain, environment variables for short-lived tokens) or an encrypted configuration file, if supported by the `bird` CLI or a wrapper. | LLM | SKILL.md:36 | |
| INFO | Placeholder for interaction with unvetted 'gog' skill The `scripts/notify.sh` script includes a placeholder for sending notifications via Gmail using a 'gog skill'. If this functionality were to be implemented, it would introduce a dependency on an external skill (`gog`) which would need its own security vetting. The interaction could involve passing sensitive data (the entire reading list) to `gog`, potentially granting `gog` excessive permissions or exposing data to an unvetted component. If implementing interaction with other skills, ensure they are thoroughly vetted for security. Clearly define the scope of data shared and permissions granted to external skills. Implement robust input validation and sanitization for any data passed between skills. | LLM | scripts/notify.sh:93 |
Scan History
Embed Code
[](https://skillshield.io/report/865ca8951a61a45a)
Powered by SkillShield