Trust Assessment
glab-config received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include arbitrary command execution via the `glab_pager` configuration (critical) and GitLab token exposure via `glab config get` (high).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Arbitrary command execution via `glab_pager` configuration | LLM | SKILL.md:23 |
| HIGH | GitLab token exposure via `glab config get` | LLM | SKILL.md:26 |

CRITICAL: Arbitrary command execution via `glab_pager` configuration
The skill describes the `glab_pager` setting, which lets users specify an arbitrary command to use as a pager (e.g., `glab config set glab_pager 'malicious_script.sh'`). If the `glab` CLI later invokes the configured pager, it executes the attacker-controlled command, leading to arbitrary command execution and potential system compromise. Recommendation: prevent the LLM from setting `glab_pager` to arbitrary commands. If the setting is necessary, restrict values to a safe allowlist of known pagers (e.g., `less`, `more`) and disallow shell metacharacters and script paths. Alternatively, ensure `glab` itself sanitizes or sandboxes pager execution to prevent arbitrary code execution.

HIGH: GitLab token exposure via `glab config get`
The skill describes the `glab config get <key>` command, which retrieves configuration values, including the `token` setting that stores the GitLab access token. A malicious prompt could instruct the LLM to run `glab config get token` and then exfiltrate the retrieved credential. Recommendation: implement strict input validation for `glab config get` commands, especially for sensitive keys like `token`. Consider disallowing retrieval of sensitive keys or requiring explicit user confirmation before exposing such values.
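The allowlist mitigation recommended for the `glab_pager` finding can be sketched as a small pre-execution guard. This is a hypothetical check an agent harness could apply before running `glab config set glab_pager <value>`; it is not part of `glab` itself, and the allowlist contents are an assumption:

```python
import re

# Hypothetical guard: only bare, well-known pager binaries are allowed.
# The set below is an example allowlist, not an official glab policy.
SAFE_PAGERS = {"less", "more", "cat"}

def is_safe_pager(value: str) -> bool:
    """Accept only a bare allowlisted pager name: no paths, no
    arguments, and no shell metacharacters of any kind."""
    return bool(re.fullmatch(r"[A-Za-z]+", value)) and value in SAFE_PAGERS
```

With this guard, `is_safe_pager("less")` passes, while values such as `malicious_script.sh` or `less; rm -rf ~` are rejected because they contain path or shell metacharacters.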
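The confirmation requirement recommended for the token-exposure finding can likewise be sketched as a gate that an agent harness applies before executing `glab config get <key>`. The set of sensitive key names is an assumption (only `token` is named in the report):

```python
# Hypothetical policy gate for `glab config get <key>`: sensitive keys
# may only be read after explicit user confirmation.
SENSITIVE_KEYS = {"token"}  # assumed sensitive-key list

def allow_config_get(key: str, user_confirmed: bool = False) -> bool:
    """Return True if reading this config key should be permitted."""
    if key.lower() in SENSITIVE_KEYS:
        return user_confirmed  # block unless the user explicitly approved
    return True
```

Under this policy, `allow_config_get("editor")` is permitted unconditionally, while `allow_config_get("token")` is denied unless the caller passes `user_confirmed=True`.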