Trust Assessment
pokemon-red received a trust score of 70/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. The key findings are Unpinned Python Dependencies, Potential Command Injection via Environment Variable, and Potential Path Traversal via User-Controlled Input to Local Server.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned Python Dependencies.** The skill instructs the user to install Python packages with `pip install` without specifying exact versions. This is a supply-chain risk: a future malicious release of any of these packages (or their transitive dependencies) could compromise the system without explicit user review, and unpinned installs also make builds non-deterministic. Recommendation: pin all Python dependencies to exact versions (e.g., `pyboy==1.0.6`) in a `requirements.txt` file and instruct the user to install with `pip install -r requirements.txt`. | LLM | SKILL.md:16 |
| HIGH | **Potential Command Injection via Environment Variable.** The skill instructs the user to execute a Python script via a user-set environment variable, `$POKEMON_DIR`. If an LLM agent acting as the user does not validate the value before execution, a malicious value (e.g., `$(rm -rf /)`) could lead to arbitrary command execution. Recommendation: have the user supply an absolute path directly in the command, or provide a safer mechanism such as a configuration file or a dedicated script that validates the path. If `POKEMON_DIR` must remain an environment variable, explicitly warn about injection risks and advise sanitization. | LLM | SKILL.md:22 |
| MEDIUM | **Potential Path Traversal via User-Controlled Input to Local Server.** The skill instructs the user to send a POST request to a local server endpoint (`/api/command`) with a user-controlled `name` parameter for saving game state. If `emulator_server.py` does not sanitize `name` before using it to construct a file path, an attacker could traverse directories (e.g., `../../../../etc/passwd`) and write files to arbitrary filesystem locations. Recommendation: `emulator_server.py` must strictly validate `name`, allowing only a safe character set and rejecting path separators (`/`, `\`) and `..` sequences, and should store saves in a dedicated, isolated directory. | LLM | SKILL.md:80 |
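The unpinned-dependency finding can be remediated with a pinned requirements file. A minimal sketch follows; `pyboy==1.0.6` is the version cited in the finding, and any other packages the skill installs should be pinned the same way (`pip freeze` can generate such a file from a known-good environment):

```text
# requirements.txt -- every dependency pinned to an exact, reviewed version,
# so a malicious future release cannot be pulled in silently.
pyboy==1.0.6
```

The user then installs with `pip install -r requirements.txt`. For stronger supply-chain protection, hashes can also be recorded and enforced with pip's `--require-hashes` mode.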
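For the `$POKEMON_DIR` finding, the core mitigations are to validate the value before use and to avoid handing it to a shell at all. A minimal Python sketch, assuming the script name `emulator_server.py` from the finding (the validation helper itself is illustrative, not part of the skill):

```python
import os
import subprocess
import sys
import tempfile

def resolve_pokemon_dir(raw: str) -> str:
    """Reject empty or non-directory values before any command uses them."""
    if not raw:
        raise ValueError("POKEMON_DIR is not set")
    path = os.path.realpath(raw)  # collapse ../ segments and symlinks
    if not os.path.isdir(path):
        raise ValueError(f"POKEMON_DIR is not a directory: {path!r}")
    return path

# Demo: a real directory passes validation; a shell payload does not.
with tempfile.TemporaryDirectory() as demo_dir:
    safe = resolve_pokemon_dir(demo_dir)
    # Passing an argument list (no shell=True) means the value is never
    # parsed by a shell, so $(...) or ; inside it cannot execute.
    subprocess.run([sys.executable, "-c", "pass"], cwd=safe, check=True)

try:
    resolve_pokemon_dir("$(rm -rf /)")  # not an existing directory
except ValueError as exc:
    print("rejected:", exc)
```

Even with validation, using an argument list rather than a shell string is the decisive control: the environment variable's contents are passed as data, never interpreted as shell syntax.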
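The path-traversal finding calls for an allowlist on the `name` parameter plus containment in a dedicated save directory. A minimal sketch of what such a check in `emulator_server.py` could look like; the directory name `saves`, the `.state` suffix, and the helper name are assumptions for illustration:

```python
import re
from pathlib import Path

# Assumed layout: all saves live under one dedicated directory.
SAVE_DIR = Path("saves")
# Allowlist: letters, digits, underscore, hyphen; no separators, no dots.
SAVE_NAME_RE = re.compile(r"[A-Za-z0-9_-]{1,64}")

def save_path_for(name: str) -> Path:
    """Map a user-supplied save name to a path inside SAVE_DIR, or raise."""
    if not SAVE_NAME_RE.fullmatch(name):
        raise ValueError(f"invalid save name: {name!r}")
    path = (SAVE_DIR / f"{name}.state").resolve()
    # Defense in depth: the resolved path must still sit inside SAVE_DIR
    # even if a hostile name somehow slipped past the allowlist.
    if SAVE_DIR.resolve() not in path.parents:
        raise ValueError("save path escapes the save directory")
    return path
```

The regex alone already rejects `/`, `\`, and `..`; the containment check after `resolve()` is a second, independent barrier, which is the usual pattern for handling attacker-influenced file names.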
Embed Code
[SkillShield report for pokemon-red](https://skillshield.io/report/f9189b9ae87e5e5c)
Powered by SkillShield