Trust Assessment
veil received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 13 findings: 2 critical, 8 high, 1 medium, and 2 low severity. Key findings include "Sensitive path access: AI agent config", "Sensitive environment variable access: $HOME", and "User-controlled input directly sent to external LLM".
The analysis covered 4 layers: manifest_analysis, llm_behavioral_safety, dependency_graph, static_code_analysis. The static_code_analysis layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 15, 2026 (commit 66de0a1e). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (13)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **User-controlled input directly sent to external LLM.** The `veil-bankr-prompt.sh` script takes user-controlled input (`$PROMPT`) and sends it directly to an external LLM service (`bankr.bot`) via the `bankr` CLI or a `curl` request. Although `jq` escapes the prompt for JSON, this does not prevent prompt injection: malicious instructions embedded in the user's input can manipulate the LLM's behavior, bypass safety mechanisms, or extract sensitive information from the LLM's context. Remediation: implement robust input sanitization or a strict allow-list for prompts before sending them to the LLM; if arbitrary prompts are required, clearly document the prompt-injection risk to the user and the host LLM, and ensure the LLM has strong guardrails against malicious instructions. | Unknown | scripts/veil-bankr-prompt.sh:20 |
| CRITICAL | **User-controlled input directly sent to external LLM.** Same finding as above, at a second call site in the same script. | Unknown | scripts/veil-bankr-prompt.sh:34 |
| HIGH | **Sensitive path access: AI agent config.** Access to the AI agent config path `~/.clawdbot/` detected; this may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Unknown | SKILL.md:16 |
| HIGH | **Sensitive path access: AI agent config.** Same finding as above (`~/.clawdbot/`), at an additional location. | Unknown | SKILL.md:17 |
| HIGH | **Sensitive path access: AI agent config.** Same finding as above (`~/.clawdbot/`), at an additional location. | Unknown | SKILL.md:43 |
| HIGH | **Sensitive path access: AI agent config.** Same finding as above (`~/.clawdbot/`), at an additional location. | Unknown | SKILL.md:48 |
| HIGH | **Sensitive path access: AI agent config.** Same finding as above (`~/.clawdbot/`), at an additional location. | Unknown | SKILL.md:49 |
| HIGH | **Sensitive path access: AI agent config.** Same finding as above (`~/.clawdbot/`), at an additional location. | Unknown | SKILL.md:52 |
| HIGH | **Sensitive path access: AI agent config.** Same finding as above (`~/.clawdbot/`), at an additional location. | Unknown | SKILL.md:105 |
| HIGH | **Transaction JSON appended to LLM prompt.** The `veil-bankr-submit-tx.sh` script builds a prompt by appending user-provided transaction JSON (`$TX_JSON`) to a fixed instruction, then sends the combined prompt to an external LLM via `veil-bankr-prompt.sh`. Although the JSON structure is validated, the *content* of fields (e.g. the `data` field or other arbitrary fields) could be crafted to include malicious instructions, potentially manipulating the LLM's interpretation of or actions on the transaction. Remediation: review and sanitize or filter transaction JSON field content before including it in an LLM prompt, and consider strict schema validation that disallows arbitrary text in sensitive fields; the fixed instruction 'do not change any fields' is insufficient to prevent LLM manipulation. | Unknown | scripts/veil-bankr-submit-tx.sh:21 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to the sensitive environment variable `$HOME` detected in a shell context. Verify this access is necessary and the value is not exfiltrated. | Unknown | scripts/_common.sh:5 |
| LOW | **Unquoted expansion of `$@` in `veil_cli` function.** The function passes `$@` unquoted to the `veil` or `node` command, which causes word splitting and globbing: an argument containing spaces is treated as multiple arguments, and shell wildcards (like `*` or `?`) may expand to filenames. This does not directly enable arbitrary command execution here (`veil` and `node` are binaries), but it can cause unexpected behavior, argument manipulation, or bypass of intended argument parsing. Remediation: always quote shell expansions, especially `$@` as `"$@"`, in both `veil "$@"` and `node "$SDK_REPO/dist/cli/index.cjs" "$@"`. | Unknown | scripts/_common.sh:54 |
| LOW | **Unquoted expansion of `$@` in `veil_cli` function.** Same finding as above, at the second command in the function. | Unknown | scripts/_common.sh:56 |
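The LOW and CRITICAL findings above both have straightforward shell-level mitigations. The sketch below is illustrative only: the function body, the `MAX_PROMPT_LEN` limit, and `check_prompt` are hypothetical names and policies, not code from the actual veil scripts. It shows (1) quoting `"$@"` so arguments survive word splitting, and (2) a minimal prompt guard that rejects oversized or control-character input before `jq` builds the JSON body (remembering that `jq` escaping alone guarantees valid JSON, not safe prompt content).

```shell
#!/usr/bin/env bash
set -euo pipefail

# (1) Fix for the unquoted-$@ findings: forward arguments as distinct words.
# Unquoted $@ would re-split "a b" into two arguments and expand globs.
veil_cli() {
  printf '%s\n' "$@"   # stand-in for: veil "$@"  or  node .../index.cjs "$@"
}

# (2) Hypothetical prompt guard, a partial mitigation for the CRITICAL
# findings: enforce a length cap and reject control characters, then let
# jq produce a safely escaped JSON request body.
MAX_PROMPT_LEN=2000
check_prompt() {
  local prompt="$1"
  (( ${#prompt} <= MAX_PROMPT_LEN )) || { echo "prompt too long" >&2; return 1; }
  if printf '%s' "$prompt" | LC_ALL=C grep -q '[[:cntrl:]]'; then
    echo "control characters rejected" >&2; return 1
  fi
  # jq escaping protects the JSON transport layer only; it does not stop
  # prompt injection, so the guardrails above (and server-side ones) still matter.
  jq -cn --arg p "$prompt" '{prompt: $p}'
}
```

Note that no amount of client-side filtering fully eliminates prompt injection for free-form prompts; the allow-list approach recommended in the findings remains the stronger control.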
Full report: https://skillshield.io/report/dc4652e565847843
Powered by SkillShield