Trust Assessment
trawl received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 16 findings: 0 critical, 9 high, 7 medium, and 0 low severity. Key findings include Sensitive environment variable access: $HOME, Sensitive path access: AI agent config, Unsanitized user input in jq filter leads to command injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 5/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings16
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Sensitive path access: AI agent config Access to AI agent config path detected: '~/.clawdbot/'. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/audsmith28/trawl/scripts/setup.sh:48 | |
| HIGH | Sensitive path access: AI agent config Access to AI agent config path detected: '~/.clawdbot/'. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/audsmith28/trawl/scripts/setup.sh:51 | |
| HIGH | Sensitive path access: AI agent config Access to AI agent config path detected: '~/.clawdbot/'. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/audsmith28/trawl/scripts/setup.sh:58 | |
| HIGH | Sensitive path access: AI agent config Access to AI agent config path detected: '~/.clawdbot/'. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | skills/audsmith28/trawl/scripts/sweep.sh:54 | |
| HIGH | Unsanitized user input in jq filter leads to command injection The `STATE_FILTER` and `CAT_FILTER` variables are directly interpolated into `jq` filter strings without sanitization. A malicious user providing crafted input (e.g., `--state '"foo" | . + system("evil_command")'`) could inject arbitrary `jq` expressions, including the `system()` function, leading to arbitrary command execution on the host system. Pass user-controlled filter values to `jq` using `--arg` or `--argjson` to prevent injection, or rigorously sanitize input to ensure it only contains expected values. | LLM | scripts/leads.sh:30 | |
| HIGH | Unsanitized user input in jq filter leads to command injection The `CATEGORY_FILTER` and `STATE_FILTER` variables are directly interpolated into `jq` filter strings within the `build_filter` function without sanitization. A malicious user providing crafted input (e.g., `--category '"foo" | . + system("evil_command")'`) could inject arbitrary `jq` expressions, including the `system()` function, leading to arbitrary command execution on the host system. Pass user-controlled filter values to `jq` using `--arg` or `--argjson` to prevent injection, or rigorously sanitize input to ensure it only contains expected values. | LLM | scripts/report.sh:58 | |
| HIGH | MoltBook API key exfiltration via configurable API base URL The `MOLTBOOK_API_KEY` is used in `curl` commands to authenticate with the MoltBook API. The base URL for these API calls (`API_BASE`) is read directly from `config.json`. If a malicious actor can modify `config.json` (e.g., via prompt injection into the host LLM that writes to files), they could set `API_BASE` to an attacker-controlled server. This would cause the `MOLTBOOK_API_KEY` to be sent to the malicious endpoint, leading to credential harvesting and data exfiltration. Implement strict validation for `API_BASE` to ensure it points only to trusted MoltBook domains. Additionally, restrict write access to `config.json` from untrusted sources or LLM interactions. | LLM | scripts/leads.sh:96 | |
| HIGH | MoltBook API key exfiltration via configurable API base URL The `MOLTBOOK_API_KEY` is used in `curl` commands to authenticate with the MoltBook API. The base URL for these API calls (`API_BASE`) is read directly from `config.json`. If a malicious actor can modify `config.json` (e.g., via prompt injection into the host LLM that writes to files), they could set `API_BASE` to an attacker-controlled server. This would cause the `MOLTBOOK_API_KEY` to be sent to the malicious endpoint, leading to credential harvesting and data exfiltration. Implement strict validation for `API_BASE` to ensure it points only to trusted MoltBook domains. Additionally, restrict write access to `config.json` from untrusted sources or LLM interactions. | LLM | scripts/qualify.sh:30 | |
| HIGH | MoltBook API key exfiltration via configurable API base URL The `MOLTBOOK_API_KEY` is used in `curl` commands to authenticate with the MoltBook API. The base URL for these API calls (`API_BASE`) is read directly from `config.json`. If a malicious actor can modify `config.json` (e.g., via prompt injection into the host LLM that writes to files), they could set `API_BASE` to an attacker-controlled server. This would cause the `MOLTBOOK_API_KEY` to be sent to the malicious endpoint, leading to credential harvesting and data exfiltration. Implement strict validation for `API_BASE` to ensure it points only to trusted MoltBook domains. Additionally, restrict write access to `config.json` from untrusted sources or LLM interactions. | LLM | scripts/sweep.sh:48 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/audsmith28/trawl/scripts/leads.sh:14 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/audsmith28/trawl/scripts/qualify.sh:10 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/audsmith28/trawl/scripts/report.sh:9 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/audsmith28/trawl/scripts/setup.sh:6 | |
| MEDIUM | Sensitive environment variable access: $HOME Access to sensitive environment variable '$HOME' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/audsmith28/trawl/scripts/sweep.sh:9 | |
| MEDIUM | Configurable LLM prompt templates vulnerable to injection The skill constructs prompts for an external LLM using `intro_template` and `question_template` read directly from `config.json`. If a malicious actor can modify `config.json` (e.g., via prompt injection into the host LLM that writes to files), they could inject prompt injection instructions into these template fields. These instructions could manipulate the behavior of the external LLM, potentially leading to unintended actions, information disclosure, or generation of malicious content. Sanitize or validate the content of all user-configurable fields used in LLM prompts (e.g., `intro_template`, `question_template`) to remove or neutralize potential prompt injection instructions. Restrict write access to `config.json` from untrusted sources. | LLM | scripts/qualify.sh:160 | |
| MEDIUM | Configurable LLM prompt components vulnerable to injection The skill constructs prompts for an external LLM using `identity.description`, `signals[].description`, and `signals[].query` read directly from `config.json`. If a malicious actor can modify `config.json` (e.g., via prompt injection into the host LLM that writes to files), they could inject prompt injection instructions into these fields. These instructions could manipulate the behavior of the external LLM, potentially leading to unintended actions, information disclosure, or generation of malicious content. Sanitize or validate the content of all user-configurable fields used in LLM prompts (e.g., `identity.description`, `signals[].description`, `signals[].query`) to remove or neutralize potential prompt injection instructions. Restrict write access to `config.json` from untrusted sources. | LLM | scripts/sweep.sh:166 |
Scan History
Embed Code
[](https://skillshield.io/report/6f932b450579daf2)
Powered by SkillShield