Trust Assessment
coding-opencode received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 2 critical, 1 high, 1 medium, and 0 low severity. Key findings include arbitrary command execution via the `exec` tool and `wsl opencode`; the ability to operate Docker containers via PowerShell; and a default working directory inside the Administrator's Documents folder.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 18/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution via `exec` tool and `wsl opencode`.** The skill explicitly states that it uses the `exec` tool to call `opencode` commands, which are then executed via `wsl` (Windows Subsystem for Linux). User-provided input is incorporated directly into these commands as arguments, creating a direct path for command injection: an attacker can execute arbitrary shell commands on the host by crafting malicious input. **Remediation:** avoid using `exec` with unsanitized user input. If `exec` is necessary, implement strict input validation and sanitization, or use a more constrained execution environment (e.g., a sandboxed container with minimal privileges) that does not allow arbitrary shell commands, and verify that `opencode` itself is not vulnerable to injection through its arguments. | LLM | SKILL.md:40 |
| CRITICAL | **Ability to operate Docker containers via PowerShell.** The skill explicitly states that it can "operate Docker containers via PowerShell" (original text in Indonesian: 'mengoperasikan Docker container via PowerShell'), implying the ability to execute `docker` commands on the host. This is a severe risk: it can lead to arbitrary code execution on the host (e.g., by running privileged containers, mounting host paths, or exploiting Docker daemon vulnerabilities), container escape, or resource exhaustion, and constitutes both excessive permissions and a command-injection vector. **Remediation:** remove direct Docker operation from the skill. If containerization is required, use a highly restricted, sandboxed environment, or delegate container operations to a trusted, isolated service with strict access controls and no direct user input. | LLM | SKILL.md:58 |
| HIGH | **Default working directory in the Administrator's Documents folder.** The skill sets `C:\Users\Administrator\Documents\Jagonyakomputer` as the default working directory for all coding operations and file manipulations, meaning it can read, write, and modify files inside a sensitive `Documents` folder, potentially the `Administrator`'s. Such broad access to personal files is excessive and could lead to data exfiltration, data corruption, or the introduction of malicious code if combined with other vulnerabilities. **Remediation:** restrict the skill's file-system access to a dedicated, isolated, temporary working directory with minimal necessary permissions, and do not operate directly inside sensitive user directories such as `Documents` or `Administrator` folders. | LLM | SKILL.md:54 |
| MEDIUM | **Configurable LLM prompts via JSON files.** The skill notes that prompt configurations can be changed in `.opencode/oh-my-opencode.json` and `~/.config/opencode/oh-my-opencode.json`. If these user-modifiable files feed into the LLM's prompt, they create a prompt-injection vector: an attacker could edit them to manipulate the LLM's behavior, bypass safety mechanisms, or extract sensitive information. **Remediation:** treat any prompt loaded from a user-modifiable file as untrusted input; sanitize and validate such configuration before incorporating it into the prompt, and keep critical system prompts out of user-modifiable files. | LLM | SKILL.md:67 |
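The remediation for the first critical finding — pass user input as literal arguments rather than shell text, behind an allowlist — can be sketched in Python. The `wsl opencode` invocation shape and the subcommand allowlist here are illustrative assumptions, not part of the skill:

```python
import subprocess

# Hypothetical allowlist of the opencode subcommands the skill actually needs.
ALLOWED_SUBCOMMANDS = {"run", "auth", "serve"}

def build_opencode_argv(subcommand: str, *args: str) -> list[str]:
    """Build the argv for `wsl opencode ...`, rejecting unexpected
    subcommands. Returning a list (never a shell string) means user
    input is passed as literal arguments, not interpreted by a shell."""
    if subcommand not in ALLOWED_SUBCOMMANDS:
        raise ValueError(f"subcommand not allowed: {subcommand!r}")
    return ["wsl", "opencode", subcommand, *args]

def run_opencode(subcommand: str, *args: str) -> str:
    # shell=False (the subprocess.run default) is what prevents injection:
    # even "main.py; rm -rf /" arrives as one literal argument.
    result = subprocess.run(
        build_opencode_argv(subcommand, *args),
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

Because no shell ever parses the arguments, metacharacters like `;` or `&&` in user input lose their meaning; the allowlist additionally blocks unexpected subcommands outright.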
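The working-directory finding calls for confining file operations to a dedicated root. A minimal containment check, assuming a hypothetical `opencode-sandbox` directory in place of the Administrator's `Documents` folder:

```python
from pathlib import Path

# Hypothetical dedicated sandbox root, instead of
# C:\Users\Administrator\Documents\Jagonyakomputer.
WORK_ROOT = Path("opencode-sandbox").resolve()

def safe_path(requested: str) -> Path:
    """Resolve a requested path and refuse anything that escapes the
    dedicated working directory (e.g. via '..' components or an
    absolute path)."""
    candidate = (WORK_ROOT / requested).resolve()
    if not candidate.is_relative_to(WORK_ROOT):
        raise PermissionError(f"path escapes working directory: {requested}")
    return candidate
```

Resolving before the `is_relative_to` check (Python 3.9+) is the important step: it normalizes `..` traversal and absolute-path overrides before containment is decided.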
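For the prompt-configuration finding, treating `oh-my-opencode.json` as untrusted input might look like the following sketch. The key allowlist and length cap are illustrative assumptions, not the skill's actual schema:

```python
import json
import re

ALLOWED_KEYS = {"prompt", "model"}   # hypothetical config schema
MAX_PROMPT_LEN = 2000                # arbitrary cap for illustration

def load_prompt_config(path: str) -> dict:
    """Load a user-editable prompt config, keeping only known keys,
    capping length, and stripping control characters, so the file is
    treated as untrusted data rather than trusted prompt text."""
    with open(path, encoding="utf-8") as f:
        raw = json.load(f)
    cleaned = {}
    for key in ALLOWED_KEYS & raw.keys():
        value = str(raw[key])[:MAX_PROMPT_LEN]
        # Drop ASCII control characters (keeping \t and \n).
        cleaned[key] = re.sub(r"[\x00-\x08\x0b-\x1f\x7f]", "", value)
    return cleaned
```

Sanitization alone does not make the loaded text safe to trust as instructions; critical system prompts should still live outside user-modifiable files, with these values injected only as clearly delimited data.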
[Full report](https://skillshield.io/report/385d86cf26737bf0)