Trust Assessment
alby-bitcoin-payments-cli-skill received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include "Untrusted content attempts to instruct LLM" and "Potential command injection through CLI arguments".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted content attempts to instruct LLM.** The untrusted `SKILL.md` contains direct instructions for the LLM, such as "Let the user know they can save their secret here." and "If it is not saved, you should ask the user if you would like to save it". These directives attempt to manipulate the LLM's behavior from untrusted input, which is a critical prompt injection vulnerability per the analysis rules. Remediation: remove all direct instructions or directives for the LLM from the untrusted `SKILL.md` content; the skill description should only describe the skill's functionality, not how the LLM should interact with the user. A minimal directive-scanning sketch follows the table. | LLM | SKILL.md:18 |
| HIGH | **Potential command injection through CLI arguments.** The skill describes invoking `npx @getalby/cli [options] <command>`. If the `[options]` or `<command>` arguments are constructed directly from untrusted user input without proper sanitization or escaping, a malicious user could inject arbitrary shell commands; for example, providing `--connection-secret 'foo; evil_command'` or a command like `'pay-invoice; rm -rf /'` could lead to arbitrary code execution. Remediation: when constructing shell commands from user input, escape or validate all arguments, use a dedicated library for shell command construction or strictly whitelist allowed commands and options, and avoid direct string concatenation of user input into shell commands. A safe-invocation sketch follows the table. | LLM | SKILL.md:6 |
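To make the critical finding concrete, the sketch below flags directive, second-person phrasing in a `SKILL.md` before it reaches the model. This is an illustrative heuristic only, not SkillShield's actual analysis rule; the regular expressions are assumptions drawn from the phrases quoted in the finding.

```typescript
// Illustrative heuristic only -- not SkillShield's actual detection rule.
// Flags directive phrasing in a skill description so authors can spot
// instructions aimed at the LLM, like those cited in the finding above.
import { readFileSync } from "node:fs";

const DIRECTIVE_PATTERNS: RegExp[] = [
  /\byou should\b/i,
  /\blet the user know\b/i,
  /\bask the user\b/i,
  /\btell the user\b/i,
];

function findDirectives(skillMarkdown: string): { line: number; text: string }[] {
  return skillMarkdown
    .split("\n")
    .map((text, i) => ({ line: i + 1, text }))
    .filter(({ text }) => DIRECTIVE_PATTERNS.some((p) => p.test(text)));
}

// Report any directive-style lines in SKILL.md for manual review.
const hits = findDirectives(readFileSync("SKILL.md", "utf8"));
for (const { line, text } of hits) {
  console.warn(`SKILL.md:${line} looks like an instruction to the LLM: ${text.trim()}`);
}
```

A skill author could run this over `SKILL.md` and rewrite any flagged lines as plain descriptions of the skill's functionality rather than instructions to the model.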
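For the high-severity finding, the sketch below shows one way to invoke the CLI without exposing a shell to untrusted input: arguments are passed as an array to `execFile` (which does not spawn a shell) and commands are checked against an allowlist. This is a sketch of the recommended mitigation, not the skill's actual implementation; only `pay-invoice` and `--connection-secret` appear in the finding itself, and the allowlist contents and the `userProvidedSecret` placeholder are illustrative assumptions.

```typescript
import { execFile } from "node:child_process";

// Hypothetical allowlist: real entries would come from @getalby/cli's documented commands.
const ALLOWED_COMMANDS = new Set(["pay-invoice"]);

function runAlbyCli(command: string, args: string[]): Promise<string> {
  if (!ALLOWED_COMMANDS.has(command)) {
    return Promise.reject(new Error(`Command not allowed: ${command}`));
  }
  return new Promise((resolve, reject) => {
    // execFile passes each argument as a literal string and spawns no shell,
    // so input like "foo; evil_command" cannot be interpreted as shell syntax.
    execFile("npx", ["@getalby/cli", command, ...args], (err, stdout, stderr) => {
      if (err) reject(new Error(stderr || err.message));
      else resolve(stdout);
    });
  });
}

// Placeholder for untrusted input (e.g. pasted by the user); it stays a single argument.
const userProvidedSecret = "untrusted-input-from-user";
runAlbyCli("pay-invoice", ["--connection-secret", userProvidedSecret])
  .then(console.log)
  .catch(console.error);
```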