Trust Assessment
glab received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. The key findings are a potential command injection via `glab` arguments (critical) and excessive permissions from broad GitLab CLI access (high).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, making it the weakest area and the main driver of the overall score.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via `glab` arguments.** The skill exposes the `glab` CLI tool, which executes shell commands. If the LLM constructs `glab` commands by directly interpolating untrusted user input into arguments (e.g., repository paths, issue titles, API endpoints, or variable values) without proper sanitization, it could lead to command injection. An attacker could craft malicious input containing shell metacharacters (e.g., `;`, `\|`, `&`, `$()`) to execute arbitrary commands on the host system. *Mitigation:* Implement strict input sanitization and validation for all arguments passed to `glab` commands, especially when derived from user input. Ensure that user-provided strings are properly escaped or validated against expected patterns before being used in shell commands. Consider using a wrapper function that explicitly handles shell escaping for arguments. | LLM | skill.md:67 |
| HIGH | **Excessive Permissions: Broad GitLab CLI Access.** The skill grants the LLM access to the full `glab` CLI, which provides extensive control over GitLab projects, including creating and merging merge requests, managing issues, CI/CD pipelines, variables, and releases, and making arbitrary API calls. This broad scope of actions, if misused or compromised, could lead to significant unintended modifications, data loss, or unauthorized access within GitLab. *Mitigation:* Implement fine-grained access control for the `glab` tool, if possible, restricting the types of commands or specific GitLab resources the LLM can interact with. Consider sandboxing the execution environment to limit its impact, or requiring human approval for sensitive operations (e.g., merging MRs, deleting resources, modifying CI/CD variables). | LLM | skill.md:1 |
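The wrapper-function mitigation suggested for the critical finding can be sketched as follows. This is a minimal illustration, not part of the skill itself: the `ALLOWED` set and `run_glab` helper are hypothetical names, and the subcommand allowlist is an example policy, not anything `glab` ships with. The key point is passing an argument list with `shell=False`, so shell metacharacters in user input are treated as literal strings rather than interpreted by a shell.

```python
import subprocess

# Hypothetical allowlist restricting the LLM to read-mostly glab subcommands.
ALLOWED = {("issue", "list"), ("mr", "list"), ("mr", "view"), ("repo", "view")}

def run_glab(subcommand, args):
    """Run `glab <subcommand> <args>` without a shell, after checking the
    subcommand against the allowlist. Raises ValueError if not permitted."""
    if tuple(subcommand) not in ALLOWED:
        raise ValueError(f"glab {' '.join(subcommand)} is not permitted")
    # An argument list with shell=False means metacharacters such as ';',
    # '|', '&', or '$()' inside args are passed through as plain strings,
    # never interpreted by a shell.
    return subprocess.run(["glab", *subcommand, *args],
                          shell=False, capture_output=True, text=True)
```

With this shape, even an issue title like `"; rm -rf ~"` arrives at `glab` as a single literal argument, which neutralizes the injection vector described above.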
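For the high-severity finding, the human-approval suggestion can be sketched the same way. Again this is an assumption-laden illustration: the `SENSITIVE` set and `run_with_approval` helper are invented for this example, and which operations count as destructive is a policy choice, not something `glab` defines.

```python
import subprocess

# Hypothetical set of glab operations treated as destructive enough to
# require explicit human sign-off before execution (an example policy).
SENSITIVE = {("mr", "merge"), ("repo", "delete"), ("variable", "set")}

def run_with_approval(subcommand, args, confirm=input):
    """Execute `glab <subcommand> <args>`, pausing for interactive human
    approval when the subcommand is in the SENSITIVE set.

    Returns None if the human declines, else the CompletedProcess."""
    if tuple(subcommand) in SENSITIVE:
        answer = confirm(f"Allow `glab {' '.join(subcommand)}`? [y/N] ")
        if answer.strip().lower() != "y":
            return None  # operation declined by the human reviewer
    return subprocess.run(["glab", *subcommand, *args],
                          capture_output=True, text=True)
```

The `confirm` parameter defaults to `input` for interactive use but can be swapped for any callable, which also makes the gate easy to test or to route through a chat-based approval flow.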
[View the full report on SkillShield](https://skillshield.io/report/51460bef8713d6a8)