## Trust Assessment
glab-mr received a trust score of 65/100, placing it in the Caution category: the skill carries security considerations, detailed below, that users should review before deployment.
SkillShield's automated analysis identified two findings: one critical and one high severity (no medium or low). The key findings are potential command injection via a script argument and execution of `npm test` on untrusted MR content.
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. LLM Behavioral Safety scored lowest, at 55/100, indicating the most room for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
## Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential command injection via script argument.** The skill documentation describes a workflow where a shell script (`scripts/mr-review-workflow.sh`) is executed with an arbitrary command passed as an argument (e.g., `"pnpm test"`). If the LLM agent constructs this argument from untrusted user input, an attacker could inject malicious commands in place of `pnpm test` and achieve arbitrary command execution on the host.<br><br>**Remediation:** Review `mr-review-workflow.sh` to confirm it validates its arguments before execution. Ideally it should not execute an arbitrary user-provided string at all: if a specific set of commands is allowed, whitelist them; if the script is meant to run tests, have it call the test runner with fixed arguments. Any arguments the skill passes to this script should be strictly controlled and never derived directly from untrusted input (see the whitelist sketch after this table). | LLM | SKILL.md:60 |
| HIGH | **Execution of `npm test` on untrusted MR content.** The skill documentation suggests checking out an MR (`glab mr checkout 123`) and then running `npm test`. If the checked-out branch from an untrusted MR contains a malicious `package.json`, `npm test` will run whatever its `test` script defines, letting an attacker execute arbitrary commands on the host simply by submitting a malicious MR.<br><br>**Remediation:** Run `npm test` (or the equivalent for other package managers) on untrusted MRs in a sandboxed, isolated, or ephemeral environment with no access to sensitive resources or the host filesystem, or vet the `package.json` `test` script before executing it. The skill should explicitly warn users about this risk or offer a safer alternative (see the sandboxed run sketch after this table). | LLM | SKILL.md:49 |
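The whitelist remediation for the critical finding is straightforward to make concrete. Below is a minimal sketch, assuming the script takes an MR number and a test command as positional arguments; the actual contents of `scripts/mr-review-workflow.sh` are not reproduced in this report, so the argument layout and names are illustrative only.

```bash
#!/usr/bin/env bash
# Illustrative hardening sketch -- the real scripts/mr-review-workflow.sh is
# not shown in this report, so the argument layout here is an assumption.
set -euo pipefail

mr_id="$1"
test_cmd="${2:-npm test}"    # the free-form argument flagged above

# Map the requested command onto a fixed whitelist instead of executing it.
case "$test_cmd" in
  "npm test")  runner=(npm test) ;;
  "pnpm test") runner=(pnpm test) ;;
  *) echo "refusing unrecognised command: $test_cmd" >&2; exit 1 ;;
esac

glab mr checkout "$mr_id"
"${runner[@]}"               # fixed argv, no shell re-parsing of user input
```

Because the command is expanded as a fixed argv array rather than handed to `eval` or `sh -c`, nothing in the user-supplied string can smuggle in additional shell syntax.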
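For the high-severity finding, one common isolation pattern is to run the tests in a throwaway container. A minimal sketch follows, assuming Docker is available; the MR number (123) and the `node:20` image are placeholders, not part of the skill itself.

```bash
#!/usr/bin/env bash
# Sandboxing sketch: let a potentially malicious test script run, but only
# inside a disposable container with no write access to the host checkout.
set -euo pipefail

glab mr checkout 123

docker_args=(
  --rm                 # discard the container when the tests finish
  -v "$PWD:/src:ro"    # MR code visible to the container, read-only
  --tmpfs /work        # writable scratch space that lives only in memory
  -w /work
  --memory 1g          # cap resource abuse by a hostile test script
  --pids-limit 256
)

# npm install needs registry access; add --network none if deps are vendored.
docker run "${docker_args[@]}" node:20 \
  sh -c 'cp -a /src/. /work/ && npm install && npm test'
```

The read-only bind mount plus the in-memory `/work` copy means a hostile `test` script cannot modify anything that outlives the container; adding `--network none` would also block exfiltration, at the cost of requiring vendored dependencies.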
## Embed Code
[](https://skillshield.io/report/9c77c94b80eaacec)