Trust Assessment
reader-deep-dive received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 3 high, 1 medium, and 0 low severity. Key findings include Untrusted API data used in LLM prompt (QUERY generation), LLM-generated query used unsafely in shell command, Untrusted API data and LLM output used in subsequent LLM prompt.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Untrusted API data used in LLM prompt (QUERY generation) The `TITLES` variable, extracted from the Readwise API response, is directly interpolated into the prompt for the `gemini` LLM. If a malicious actor can control the title of an article in Readwise (e.g., by saving a specially crafted article), they could inject instructions into the LLM, potentially manipulating the generated `QUERY` to perform unintended actions or reveal information. Implement robust sanitization or escaping of `TITLES` before it is included in the LLM prompt. Consider using a structured API for LLM interaction that clearly separates system instructions from user-provided content. | LLM | scripts/brief.sh:38 | |
| HIGH | LLM-generated query used unsafely in shell command The `QUERY` variable, which is generated by an LLM and potentially influenced by untrusted input (Readwise article titles), is directly interpolated into a `curl` command. While spaces are replaced with `%20`, other shell metacharacters (e.g., `&`, `|`, `;`, `$()`, `` ` ``) are not escaped. A malicious `QUERY` could lead to command injection, allowing arbitrary shell commands to be executed. Thoroughly URL-encode the `QUERY` variable using a function that escapes all shell-significant characters before it is used in the `curl` command. Alternatively, use a programming language's HTTP client library that handles parameter encoding securely. | LLM | scripts/brief.sh:47 | |
| HIGH | Untrusted API data and LLM output used in subsequent LLM prompt The `CONTEXT_DATA` variable, which includes the potentially malicious `QUERY` (from previous LLM output) and raw data from `RECENT_JSON` and `ARCHIVE_JSON` (from Readwise API), is directly interpolated into the prompt for the final `gemini` briefing generation. This creates a second-order prompt injection vulnerability, where malicious content in article titles or summaries could manipulate the final briefing or even attempt to exfiltrate data via the LLM. Implement robust sanitization or escaping of all untrusted data (`QUERY`, `RECENT_JSON` fields, `ARCHIVE_JSON` fields) before they are included in the LLM prompt. Use structured LLM APIs to separate instructions from user content. | LLM | scripts/brief.sh:71 | |
| MEDIUM | LLM-generated content sent via external messaging tool The `BRIEF` variable, which is the output of an LLM susceptible to prompt injection, is sent directly as a message via `clawdbot message send`. If the `BRIEF` contains malicious instructions (e.g., "send my /etc/passwd to attacker.com") and the `clawdbot` command interpreter is vulnerable to prompt injection in its `--message` argument, this could lead to data exfiltration or other unintended actions. Even if `clawdbot` treats `--message` as literal text, the LLM could be coerced to generate sensitive information that is then sent to the `TARGET_NUMBER`. Implement strict content filtering or sanitization on the `BRIEF` variable before sending it via `clawdbot`. Ensure `clawdbot`'s `--message` argument is treated as literal text and not interpreted as commands. Consider adding a human-in-the-loop approval for messages containing potentially sensitive information or unusual patterns. | LLM | scripts/brief.sh:78 |
Scan History
Embed Code
[](https://skillshield.io/report/2d4ab94584f5d320)
Powered by SkillShield