Trust Assessment
add-minimax-provider received a trust score of 88/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 0 high, 2 medium, and 0 low severity. Key findings include sensitive environment variable access (`$HOME`) and potential shell command execution via example code.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **Sensitive environment variable access: $HOME.** Access to the sensitive environment variable `$HOME` was detected in a shell context. Verify that this access is necessary and that the value is not exfiltrated. | Static | skills/jooey/add-minimax-provider/SKILL.md:272 |
| MEDIUM | **Potential Shell Command Execution via Example Code.** The skill document contains multiple shell commands (`curl`, `python3 -c`, `launchctl`, `openclaw doctor`, `tail`) presented as examples for testing and validation. In a `claude_code` ecosystem, an AI agent might attempt to execute these commands directly, which poses a risk if the agent's execution environment lacks proper sandboxing or user confirmation for shell commands. In particular, the `curl` commands demonstrate the use of an API key, which could be exposed or misused if an agent executes them without properly handling the sensitive placeholders. Mitigations: (1) **Agent-side**: implement strict sandboxing for any shell command execution, and require explicit user confirmation before running commands, especially those involving system interaction (`launchctl`) or sensitive data (`API_KEY`). (2) **Skill-side**: if these commands are purely illustrative, wrap them in code blocks that explicitly state they are for *manual user execution* and not for agent execution, e.g. with a note such as "DO NOT EXECUTE THIS COMMAND DIRECTLY AS AN AGENT. THIS IS FOR MANUAL USER TESTING.", or render agent-executable and user-executable commands differently. | LLM | SKILL.md:78 |
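The agent-side mitigation above can be sketched as a simple pre-execution guard. This is a minimal illustration, not SkillShield or skill code; the specific environment-variable and command lists are hypothetical examples chosen to match the findings:

```python
# Hypothetical agent-side guard (a sketch, not production code): flag shell
# commands that reference sensitive environment variables or invoke risky
# binaries, so the agent can ask for explicit user confirmation first.
SENSITIVE_ENV = {"$HOME", "$API_KEY"}          # example sensitive variables
RISKY_COMMANDS = {"curl", "launchctl", "python3"}  # example risky binaries

def needs_confirmation(command: str) -> bool:
    """Return True if the command should require user confirmation."""
    # Flag any reference to a sensitive environment variable.
    if any(var in command for var in SENSITIVE_ENV):
        return True
    # Flag commands whose first word is a known risky binary.
    words = command.strip().split()
    return bool(words) and words[0] in RISKY_COMMANDS

print(needs_confirmation("ls $HOME/.config"))  # True: sensitive env var
print(needs_confirmation("launchctl list"))    # True: risky binary
print(needs_confirmation("echo hello"))        # False
```

A real agent would pair this check with sandboxed execution rather than relying on pattern matching alone, since string matching can be evaded.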
[View the full report](https://skillshield.io/report/a6c1beaa737c4817)
Powered by SkillShield