Trust Assessment
glm-coding-agent received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 14 findings: 6 critical, 4 high, 4 medium, and 0 low severity. Key findings include persistence / self-modification instructions, file read + network send exfiltration, and sensitive environment variable access (`$HOME`).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest, at 0/100, the minimum possible score.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (14)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Persistence / self-modification instructions.** Shell RC file modification for persistence. *Remediation:* remove any persistence mechanisms; skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | skills/cgnl/glm-coding-agent/SKILL.md:145 |
| CRITICAL | **Persistence / self-modification instructions.** Shell RC file modification for persistence. *Remediation:* remove any persistence mechanisms; skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | skills/cgnl/glm-coding-agent/SKILL.md:219 |
| CRITICAL | **File read + network send exfiltration.** AI agent config/credential file access. *Remediation:* remove access to sensitive files not required by the skill's stated purpose; SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | skills/cgnl/glm-coding-agent/SKILL.md:117 |
| CRITICAL | **File read + network send exfiltration.** AI agent config/credential file access. *Remediation:* remove access to sensitive files not required by the skill's stated purpose; SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | skills/cgnl/glm-coding-agent/SKILL.md:286 |
| CRITICAL | **File read + network send exfiltration.** AI agent config/credential file access. *Remediation:* remove access to sensitive files not required by the skill's stated purpose; SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | skills/cgnl/glm-coding-agent/SKILL.md:430 |
| CRITICAL | **Claude Code sandbox disabled by `safe-glm` wrapper.** The `safe-glm.sh` wrapper explicitly passes the `--dangerously-skip-permissions` flag when invoking the `claude` CLI. This disables Claude Code's built-in OS-level sandboxing, which is designed to prevent unauthorized file system access (e.g., outside the project directory, or to `~/.ssh`) and to enforce network restrictions. While the skill mentions a "git safety net," that only provides version control and rollback, not OS-level protection against malicious code execution or data exfiltration. An LLM generating malicious code, or a prompt injection attack, could leverage the disabled sandbox to perform arbitrary actions on the host with the user's permissions, bypassing the intended security controls of the `claude` tool. *Remediation:* remove the `--dangerously-skip-permissions` flag from `safe-glm.sh` to re-enable Claude Code's built-in sandboxing; if specific permissions are needed, configure them explicitly in `~/.claude/settings.json` with `allow` and `deny` rules rather than disabling the entire sandbox. | LLM | SKILL.md:200 |
| HIGH | **Sensitive path access: AI agent config.** Access to AI agent config path detected: `~/.claude/`. This may indicate credential theft. *Remediation:* verify that access to this sensitive path is justified and declared. | Static | skills/cgnl/glm-coding-agent/SKILL.md:117 |
| HIGH | **Sensitive path access: AI agent config.** Access to AI agent config path detected: `~/.claude/`. This may indicate credential theft. *Remediation:* verify that access to this sensitive path is justified and declared. | Static | skills/cgnl/glm-coding-agent/SKILL.md:286 |
| HIGH | **Sensitive path access: AI agent config.** Access to AI agent config path detected: `~/.claude/`. This may indicate credential theft. *Remediation:* verify that access to this sensitive path is justified and declared. | Static | skills/cgnl/glm-coding-agent/SKILL.md:430 |
| HIGH | **API key exposed to unsandboxed `claude` process.** The `glmcode.sh` script reads the Z.AI API key from `~/.openclaw/openclaw.json` and exports it as `ANTHROPIC_AUTH_TOKEN` for the `claude` process. Because the `safe-glm.sh` wrapper disables Claude Code's sandbox (via `--dangerously-skip-permissions`), the `claude` process runs with full access to environment variables and the file system, which significantly increases the risk of credential harvesting or data exfiltration. A malicious LLM response or prompt injection could instruct the unsandboxed process to read this API key from environment variables or memory, or read other sensitive files, and exfiltrate it to an external server. *Remediation:* re-enable Claude Code's sandbox by removing `--dangerously-skip-permissions` from `safe-glm.sh`; run `claude` with the least necessary privileges and restricted network access; consider alternative methods of passing credentials that do not expose them as environment variables to an unsandboxed process. | LLM | SKILL.md:200 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to the sensitive environment variable `$HOME` detected in a shell context. *Remediation:* verify this environment variable access is necessary and that the value is not exfiltrated. | Static | skills/cgnl/glm-coding-agent/SKILL.md:71 |
| MEDIUM | **Persistence mechanism: shell RC file modification.** Detected a shell RC file modification pattern; persistence mechanisms allow malware to survive system restarts. *Remediation:* review this persistence pattern; skills should not modify system startup configuration. | Static | skills/cgnl/glm-coding-agent/SKILL.md:145 |
| MEDIUM | **Persistence mechanism: shell RC file modification.** Detected a shell RC file modification pattern; persistence mechanisms allow malware to survive system restarts. *Remediation:* review this persistence pattern; skills should not modify system startup configuration. | Static | skills/cgnl/glm-coding-agent/SKILL.md:219 |
| MEDIUM | **Unsanitized user input passed to shell scripts.** The skill passes user-provided natural-language prompts (e.g., "Add error handling") directly as arguments to `safe-glm.sh` or `safe-glm.ps1`, via `command:"..."` in OpenClaw or directly on the command line. If these wrapper scripts do not properly sanitize or quote the arguments before using them in internal shell commands, a malicious user could craft a prompt that injects arbitrary shell commands; for example, a prompt like `'Fix bug; rm -rf /'` could execute `rm -rf /` if mishandled by the wrapper. *Remediation:* ensure that `safe-glm.sh` and `safe-glm.ps1` rigorously sanitize and properly quote all user-provided arguments before incorporating them into any internal shell commands, using robust escaping such as `printf %q` in bash or `[System.Management.Automation.Command.EscapeArgument]` in PowerShell. | LLM | SKILL.md:30 |
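The quoting remediation in the last finding can be sketched concretely. The snippet below is illustrative only, not the skill's actual wrapper code: the `prompt` value and the `printf` calls are hypothetical stand-ins for whatever `safe-glm.sh` does with user input before invoking the real CLI.

```shell
#!/usr/bin/env bash
# Illustrative sketch of injection-safe prompt handling in a wrapper script.
set -euo pipefail

prompt='Fix bug; rm -rf /'   # hostile-looking user input

# Preferred: forward the prompt as a single argv element ("$prompt"), so the
# shell never re-parses it. printf here stands in for the real CLI invocation.
printf '%s\n' "$prompt"

# If a command *string* must be assembled, escape first with the bash builtin
# printf %q; the semicolon then survives as data, not as a command separator.
safe=$(printf '%q' "$prompt")
eval "printf '%s\n' $safe"
```

Run under bash, both `printf` calls emit the prompt verbatim; the `rm -rf /` fragment is never interpreted as a command.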
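The sandbox findings recommend scoped `allow`/`deny` rules in `~/.claude/settings.json` instead of `--dangerously-skip-permissions`. A minimal sketch of what such a file could look like is below; the specific rule strings are assumptions for illustration, so consult the Claude Code permissions documentation for the exact syntax before relying on them.

```shell
# Hypothetical ~/.claude/settings.json permission rules (illustrative only);
# printed here rather than written, to avoid touching a real config.
cat <<'EOF'
{
  "permissions": {
    "allow": ["Bash(git diff:*)", "Edit"],
    "deny": ["Read(~/.ssh/**)", "Read(~/.aws/**)"]
  }
}
EOF
```

The intent is least privilege: explicitly permit the narrow operations the skill needs while denying reads of credential stores, rather than disabling the sandbox wholesale.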
Full report: [skillshield.io/report/47f8dd0f07497d86](https://skillshield.io/report/47f8dd0f07497d86)