Trust Assessment
finishing-a-development-branch received a trust score of 78/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include Potential Command Injection in 'gh pr create' title and Potential Command Injection via dynamic test command execution.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection in 'gh pr create' title.** The skill instructs the LLM to create a GitHub Pull Request using `gh pr create --title "<title>"`. The `<title>` placeholder is not explicitly defined as being sanitized or restricted. If the LLM allows user input to populate this title directly without proper sanitization, an attacker could inject arbitrary shell commands (e.g., `"; rm -rf /"`) into the title, leading to command execution on the host system. Ensure that any user-provided input for the PR title is strictly sanitized or validated to prevent shell metacharacters from being executed. Consider using a dedicated API call for PR creation if available, or escape the title string before passing it to the shell command. (See the first mitigation sketch after this table.) | LLM | SKILL.md:96 |
| HIGH | **Potential Command Injection via dynamic test command execution.** The skill repeatedly uses a placeholder `<test command>` (e.g., `npm test / cargo test / pytest / go test ./...`) to instruct the LLM to run project-specific tests. If the LLM constructs this command from untrusted input (e.g., user-provided test commands, or commands derived from potentially malicious project files), it could lead to arbitrary command execution on the host system. The LLM should strictly define and limit the test commands it executes. If user input influences the test command, it must be thoroughly sanitized and validated against a whitelist of allowed commands and arguments. Prefer executing known, safe commands or using a sandboxed environment for test execution. (See the second mitigation sketch after this table.) | LLM | SKILL.md:21 |
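The remediation advice in the first finding (do not let a raw title pass through a shell) can be illustrated with a minimal sketch. This is not part of the skill itself: it assumes Python with the standard `subprocess` module and an installed, authenticated `gh` CLI, and the `create_pr` helper name is hypothetical.

```python
import subprocess


def create_pr(title: str, body: str) -> None:
    """Create a GitHub PR without routing the title through a shell.

    Passing the command as an argument list means the title reaches
    `gh` as a single argv entry, so shell metacharacters such as `;`
    or `$( )` are never interpreted as commands.
    """
    subprocess.run(
        ["gh", "pr", "create", "--title", title, "--body", body],
        check=True,
    )


if __name__ == "__main__":
    # Even a hostile-looking title is treated as literal text, not code.
    create_pr('fix: cleanup"; rm -rf /', "Routine cleanup of the development branch.")
```

Because no shell is involved, quoting is unnecessary; if the skill must keep a single shell string, escaping the title with `shlex.quote` before interpolation is the usual fallback.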
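For the second finding, the suggested whitelist approach might look like the sketch below. The `ALLOWED_TEST_COMMANDS` table and `run_tests` helper are hypothetical names chosen for illustration and cover only the runners the finding lists.

```python
import subprocess

# Hypothetical whitelist of permitted test runners; any request that does
# not match an entry exactly is rejected instead of executed.
ALLOWED_TEST_COMMANDS: dict[str, list[str]] = {
    "npm": ["npm", "test"],
    "cargo": ["cargo", "test"],
    "pytest": ["pytest"],
    "go": ["go", "test", "./..."],
}


def run_tests(runner: str) -> int:
    """Run a project's tests only if `runner` maps to a whitelisted command."""
    argv = ALLOWED_TEST_COMMANDS.get(runner)
    if argv is None:
        raise ValueError(f"test runner {runner!r} is not on the whitelist")
    # Arguments are passed as a list, so nothing is reinterpreted by a shell.
    return subprocess.run(argv, check=False).returncode


if __name__ == "__main__":
    print(run_tests("pytest"))
```

Running the whitelisted command inside a sandboxed environment, as the finding also suggests, would further contain a test runner that is compromised through malicious project configuration.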