Trust Assessment
glab-job received a trust score of 88/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The single finding is a potential command injection via unsanitized user input in `glab` commands.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential command injection via unsanitized user input in `glab` commands | LLM | SKILL.md:10 |

The skill documentation provides examples of `glab` commands that take user-supplied arguments (e.g., `<job-id>`, `<ref>`, `<job>`). In the `claude_code` ecosystem, the LLM is expected to construct and execute these commands from user input. If an argument is not sanitized before being interpolated into a shell command, an attacker can inject arbitrary shell commands: supplying `123; rm -rf /` as a `<job-id>` would execute the injected command.

Recommended remediation: validate all user-provided arguments before constructing shell commands, and prefer an execution API that separates arguments from the command itself (e.g., `subprocess.run(['glab', 'job', 'view', job_id])` instead of `subprocess.run(f'glab job view {job_id}', shell=True)`). Explicitly instruct the LLM on how to sanitize inputs, or use a tool-execution framework that handles this automatically.
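A minimal sketch of the recommended pattern, assuming a Python wrapper around `glab job view`. The skill itself ships only documentation, so the function name and the numeric-ID validation rule below are illustrative, not part of the skill:

```python
import re
import subprocess

# Assumption: GitLab job IDs are plain integers, so anything else is rejected
# before a command is ever constructed.
JOB_ID_RE = re.compile(r"^\d+$")

def view_job(job_id: str) -> str:
    """Run `glab job view <job-id>` without exposing a shell to user input."""
    if not JOB_ID_RE.fullmatch(job_id):
        # An injection attempt such as "123; rm -rf /" fails here.
        raise ValueError(f"invalid job id: {job_id!r}")
    # List-form argv, no shell=True: even if a bad value reached this point,
    # it would be a single argument to glab, never shell syntax.
    result = subprocess.run(
        ["glab", "job", "view", job_id],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```

Passing arguments as a list means `job_id` reaches `glab` as one argv entry; the shell never parses `;`, `|`, or backticks, so an input that somehow slipped past validation still could not become a second command.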
Full report: https://skillshield.io/report/dfa44a0d2932cb82