Security Audit
Jamkris/everything-gemini-code:skills/configure-ecc
github.com/Jamkris/everything-gemini-code

Trust Assessment
Jamkris/everything-gemini-code:skills/configure-ecc received a trust score of 36/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings: potential command injection via a user-provided `ECC_ROOT` path; potential command/prompt injection during file optimization; and an untrusted source for skill/rule installation via a user-provided path.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 30, 2026 (commit 6c6f43aa). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via user-provided ECC_ROOT path.** In 'Step 0: Clone ECC Repository', if the initial `git clone` fails, the user is prompted to 'provide a local path to an existing ECC clone'. This path is then used implicitly as `ECC_ROOT` for the subsequent `cp -r` commands in 'Step 2: Select & Install Skills' and 'Step 3: Select & Install Rules'. If an attacker provides a path containing shell metacharacters (e.g., `$(malicious_command)` or `; malicious_command`), those commands could be executed when `cp -r` is invoked, leading to arbitrary command execution. *Remediation:* Sanitize or validate user-provided paths before using them in shell commands and reject paths containing shell metacharacters, or use a file-copy mechanism that does not pass unsanitized user input through a shell. | Static | SKILL.md:79 |
| HIGH | **Potential Command/Prompt Injection during file optimization.** In 'Step 5: Optimize Installed Files', the skill instructs the agent to 'Read each installed SKILL.md' and 'Edit the SKILL.md files in-place' based on user input (e.g., 'tech stack', 'preferences'), without specifying how the edits are made. If the agent uses shell commands (e.g., `sed`, `awk`) and interpolates user input directly, this creates a command injection vulnerability. If the agent instead uses its own LLM capabilities, and the `SKILL.md` or rule files (copied from `ECC_ROOT`, which may be user-provided or from a compromised source) contain prompt-injection attempts, the LLM could be manipulated into performing unintended actions during optimization. *Remediation:* For shell-based editing, strictly sanitize all user input and processed content before interpolation, or prefer safer file-manipulation libraries over direct shell execution. For LLM-based editing, validate and sandbox the content being processed, treat `SKILL.md` and rule files as untrusted input (especially when `ECC_ROOT` was user-provided), and constrain the LLM's editing capabilities so they cannot lead to arbitrary code execution or data exfiltration. | LLM | SKILL.md:179 |
| MEDIUM | **Untrusted source for skill/rule installation via user-provided path.** In 'Step 0: Clone ECC Repository', if the initial `git clone` fails, the user is asked for a local path to an existing ECC clone, and the skill then copies files from that `ECC_ROOT` into the user's `.gemini` directories with no validation or integrity check on the content. An attacker could supply a path to a malicious local repository, leading to the installation of compromised skills and rules into the user's environment. *Remediation:* If local paths are allowed, implement integrity checks (e.g., checksum verification against known-good hashes) or require the path to lie within a trusted directory, limit the scope of user-provided paths to prevent arbitrary file-system access, and explicitly warn the user about the risks of untrusted local sources. | Static | SKILL.md:35 |
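To illustrate the remediation for the first HIGH finding, here is a minimal Python sketch of copying a user-supplied directory without ever passing the path through a shell. This is not code from the audited skill; the function name and structure are illustrative only.

```python
import os
import shutil

def safe_copy_tree(user_path: str, dest: str) -> None:
    """Copy a user-supplied directory without invoking a shell.

    Resolving the path and copying with shutil (rather than running
    `cp -r` through a shell) means metacharacters in the path, such as
    `$(...)` or `;`, are treated as literal filename characters and are
    never interpreted as commands.
    """
    src = os.path.realpath(user_path)
    if not os.path.isdir(src):
        raise ValueError(f"not a directory: {src}")
    shutil.copytree(src, dest, dirs_exist_ok=True)
```

Any tool that must shell out instead could quote the path with `shlex.quote`, but avoiding the shell entirely removes the injection surface.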
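For the second HIGH finding, the safer alternative to interpolating user input into `sed`/`awk` is to edit the file with plain string operations, where the input stays inert. A minimal sketch follows; the `{{TECH_STACK}}` placeholder is a hypothetical convention, not something defined by the skill.

```python
from pathlib import Path

def personalize_skill(skill_md: str, tech_stack: str) -> None:
    """Rewrite a placeholder in a SKILL.md file without shelling out.

    str.replace is a literal substitution: unlike an interpolated
    `sed -i "s/.../$TECH_STACK/"` command, the user-supplied value is
    never parsed by a shell or by sed's expression syntax, so content
    like `; rm -rf ~` is written verbatim instead of being executed.
    """
    path = Path(skill_md)
    text = path.read_text(encoding="utf-8")
    path.write_text(text.replace("{{TECH_STACK}}", tech_stack),
                    encoding="utf-8")
```

This addresses only the shell-injection half of the finding; prompt injection against an LLM-driven editor still requires treating the file content itself as untrusted.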
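The MEDIUM finding's remediation, checksum verification against known-good hashes, can be sketched as follows. The function is illustrative; the audited skill defines no such check, and where the trusted hashes would come from is left open.

```python
import hashlib
from pathlib import Path

def verify_file(path: str, expected_sha256: str) -> bool:
    """Check a file's SHA-256 digest against a known-good hash before
    installing it from an untrusted local clone.

    Returns True only on an exact digest match, so a tampered file in
    a user-provided ECC_ROOT would be rejected rather than copied into
    the user's .gemini directories.
    """
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256.lower()
```

A fuller implementation would verify every file in the tree, or pin the upstream repository to a known commit instead of trusting arbitrary local paths.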
[View the full report on SkillShield](https://skillshield.io/report/62faf581f24e1082)