Security Audit
github-workflow-automation
github.com/davila7/claude-code-templatesTrust Assessment
github-workflow-automation received a trust score of 55/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 7 findings: 0 critical, 4 high, 2 medium, and 1 low severity. Key findings include Network egress to untrusted endpoints, Covert behavior / concealment directives, User input directly embedded in AI prompt in @mention Bot.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings7
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | User input directly embedded in AI prompt in @mention Bot The `@mention Bot` workflow's `AI Response` step directly embeds user-provided comment body (`github.event.comment.body`) and repository content (PR diff or issue body) into the AI prompt without apparent sanitization. An attacker could craft a malicious comment to manipulate the AI's behavior, potentially leading to unintended actions or information disclosure. Implement robust sanitization or input validation for `process.env.CONTEXT` and `process.env.QUESTION` before embedding them into the AI prompt. Consider using structured input for the AI or a separate LLM call to validate/sanitize user input. | LLM | SKILL.md:300 | |
| HIGH | Issue title and body directly embedded in AI prompt for triage The `Issue Triage` workflow calls an `analyzeIssue` function (implementation not shown, but implied by the `TRIAGE_PROMPT` template) which directly embeds the issue title and body into the AI prompt. An attacker creating an issue could craft a malicious title or body to manipulate the AI's classification, labeling, or automated responses, potentially leading to miscategorized issues, incorrect priority assignments, or unwanted comments. Implement robust sanitization or input validation for the issue title and body before embedding them into the AI prompt. Consider using structured input for the AI or a separate LLM call to validate/sanitize user input. | LLM | SKILL.md:140 | |
| HIGH | `exec` calls with unsanitized parameters in Smart Cherry-Pick example The `smartCherryPick` TypeScript example demonstrates direct use of `exec` with variables like `commitHash`, `targetBranch`, and `affectedFiles`. If these parameters are derived from untrusted user input (e.g., a user-provided commit hash or branch name), an attacker could inject arbitrary shell commands. While this is an example, it illustrates a dangerous pattern that, if implemented without proper sanitization, would be a critical command injection vulnerability. Any parameters passed to `exec` or similar shell execution functions must be thoroughly sanitized and validated to ensure they do not contain malicious shell commands or arguments. Prefer using libraries that abstract shell commands and provide safe argument passing. | LLM | SKILL.md:250 | |
| HIGH | Force push capability in automated rebase workflow The `Auto Rebase` workflow's `Rebase PR` step uses `git push --force-with-lease` with the `GITHUB_TOKEN`. This grants the workflow the ability to overwrite branch history. While triggered by a comment, if the comment parsing or the PR context is compromised, an attacker could potentially force push malicious changes or erase legitimate history on branches within the repository (for non-fork PRs). Carefully consider if `git push --force-with-lease` is strictly necessary. If so, ensure extremely robust checks are in place to prevent misuse, such as requiring specific user roles, additional approvals, or limiting the branches it can operate on. For PRs from forks, `GITHUB_TOKEN` is read-only, mitigating some risk, but for same-repo branches, the risk is high. | LLM | SKILL.md:220 | |
| MEDIUM | Network egress to untrusted endpoints HTTP request to raw IP address Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | cli-tool/components/mcps/devtools/figma-dev-mode.json:4 | |
| MEDIUM | Git log output directly embedded in AI prompt for risk assessment The `AI Risk Assessment` workflow step directly embeds the output of `git log --oneline` (stored in `process.env.CHANGES`) into the AI prompt without apparent sanitization. While `git log` output is typically controlled by repository committers, a malicious actor could craft commit messages designed to manipulate the AI's risk assessment, potentially leading to an incorrect assessment or bypassing security checks. Implement sanitization or validation for `process.env.CHANGES` before embedding it into the AI prompt. Consider using a separate LLM call to summarize or extract key information from commit messages in a structured way, rather than direct insertion. | LLM | SKILL.md:190 | |
| LOW | Covert behavior / concealment directives Multiple zero-width characters (stealth text) Remove hidden instructions, zero-width characters, and bidirectional overrides. Skill instructions should be fully visible and transparent to users. | Manifest | cli-tool/components/mcps/devtools/jfrog.json:4 |
Scan History
Embed Code
[](https://skillshield.io/report/921a3402895c1fc9)
Powered by SkillShield