Trust Assessment
streme-launcher received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. The key findings are arbitrary file read and upload to external services via a command-line argument (critical), and direct use of the PRIVATE_KEY environment variable for blockchain operations (high).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary file read and upload to external services via command-line argument.** The `scripts/upload-image.ts` script takes a `filePath` directly from `process.argv` without validation, reads the file with `fs.readFileSync(filePath)`, and uploads its content to an external image hosting service (Pinata, Cloudinary, or imgBB) via `fetch`. An attacker who can manipulate the command-line arguments passed to this script (e.g., through prompt injection into the host LLM that invokes the skill) could make it read and exfiltrate arbitrary files from the agent's filesystem (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, configuration files containing secrets). *Remediation:* implement strict input validation for `filePath`: canonicalize the path, check it against an allowlist of permitted directories, and reject directory traversal sequences (e.g., `../`). Ideally, the skill should operate only on files explicitly created or designated for it within a secure sandbox, or the LLM should not have direct filesystem access for arbitrary paths. | LLM | scripts/upload-image.ts:109 |
| HIGH | **Direct use of PRIVATE_KEY environment variable for blockchain operations.** The `scripts/deploy-token.ts` script reads a blockchain private key from the `PRIVATE_KEY` environment variable and uses it to initialize a wallet for deploying smart contracts. While this is a common pattern for deployment scripts, it is a significant risk in an AI agent context: if the agent's execution environment is compromised, or the agent is tricked into exposing its environment variables, the key could be exfiltrated, leading to loss of funds. *Remediation:* avoid direct private-key handling in an agent skill. If direct execution is necessary, isolate and secure the execution environment, and consider an external key management service (KMS) or explicit user confirmation for sensitive transactions rather than giving the agent direct access to the key. The LLM should be strictly forbidden from accessing or logging this variable. | LLM | scripts/deploy-token.ts:70 |
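The remediation for the critical finding calls for canonicalizing the path and checking it against an allowlist before any read. A minimal TypeScript sketch of that check, assuming a hypothetical `uploads` allowlist root (the `ALLOWED_DIR` constant and `resolveSafePath` helper are illustrative, not part of the skill):

```typescript
import * as path from "node:path";

// Hypothetical allowlist root; a real skill would pin this to its sandbox directory.
const ALLOWED_DIR = path.resolve("uploads");

function resolveSafePath(filePath: string): string {
  // Canonicalize relative to the allowlisted root so `../` sequences collapse
  // and absolute paths are made explicit.
  const resolved = path.resolve(ALLOWED_DIR, filePath);
  // Reject any result that escapes the root (directory traversal, absolute paths).
  if (resolved !== ALLOWED_DIR && !resolved.startsWith(ALLOWED_DIR + path.sep)) {
    throw new Error(`Path escapes allowed directory: ${filePath}`);
  }
  return resolved;
}
```

A production version should also resolve symlinks (e.g., with `fs.realpathSync`) and re-check containment, since a symlink inside the allowed directory can still point outside it.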
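For the high-severity finding, one way to narrow the blast radius without a full KMS integration is to confine `PRIVATE_KEY` access to a single code path, remove it from the environment after the first read, and redact it in anything that could be logged or shown to the LLM. A sketch under those assumptions (the `getDeployerKey` and `redact` helpers are hypothetical, not taken from `scripts/deploy-token.ts`):

```typescript
// Hypothetical: read PRIVATE_KEY exactly once, then remove it from the
// environment so later code (or an LLM inspecting process.env) cannot re-read it.
function getDeployerKey(): string {
  const key = process.env.PRIVATE_KEY;
  if (!key) throw new Error("PRIVATE_KEY is not set");
  delete process.env.PRIVATE_KEY;
  return key;
}

// Redact a key for logs or UI: show only the last 4 characters.
function redact(key: string): string {
  return key.length <= 4 ? "****" : "*".repeat(key.length - 4) + key.slice(-4);
}
```

This does not remove the underlying risk that the agent process holds the key in memory; it only reduces accidental exposure. The stronger mitigations the finding recommends (an external KMS, or user confirmation per transaction) keep the key out of the agent process entirely.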