Security Audit
github-workflow-automation
github.com/sickn33/antigravity-awesome-skillsTrust Assessment
github-workflow-automation received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 14 findings: 2 critical, 3 high, 7 medium, and 2 low severity. Key findings include Persistence / self-modification instructions, Sensitive environment variable access: $GITHUB_OUTPUT, Persistence mechanism: Shell RC file modification.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 23/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings14
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Persistence / self-modification instructions Shell RC file modification for persistence Remove any persistence mechanisms. Skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | skills/github-workflow-automation/SKILL.md:118 | |
| CRITICAL | Persistence / self-modification instructions Shell RC file modification for persistence Remove any persistence mechanisms. Skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | skills/github-workflow-automation/SKILL.md:120 | |
| HIGH | Shell command injection via git diff output in file iteration The 'AI Review with context' step in section 1.3 iterates over filenames obtained from `git diff --name-only`. If a malicious actor can commit a file with a name containing shell metacharacters (e.g., `malicious;command.js` or `$(rm -rf /).js`), these characters could be interpreted and executed by the shell when the `for file in ...` loop processes the filename, or when `$(cat $file)` attempts to read a file whose name contains command substitution. This leads to arbitrary command execution. When processing filenames from untrusted sources like `git diff --name-only`, ensure they are properly quoted or handled in a way that prevents shell metacharacter interpretation. For example, use `git diff --name-only -z` and process with `xargs -0`, or use a scripting language (like Python) that handles filenames safely. If using Bash, consider `while IFS= read -r file; do ... done <<< "$files"` and always quote variables like `"$file"` when used in commands, especially `cat -- "$file"`. | LLM | SKILL.md:103 | |
| HIGH | Potential command injection via untrusted input in `exec` calls The `smartCherryPick` function in section 4.2 uses `exec` calls where `commitHash` and `targetBranch` variables are directly interpolated into shell commands (e.g., `git show ${commitHash}`, `git diff ${targetBranch}...`, `git cherry-pick ${commitHash}`). If these variables are derived from untrusted user input (e.g., from a comment, issue, or other external source), a malicious actor could inject arbitrary shell commands by crafting a specially formed hash or branch name, leading to arbitrary code execution. Always sanitize or validate any user-controlled input before using it in shell commands. Prefer using libraries or APIs that execute commands with arguments passed as a list (e.g., `subprocess.run(['git', 'show', commitHash])` in Python) rather than string interpolation, as this prevents shell interpretation of arguments. If string interpolation is necessary, ensure robust escaping. | LLM | SKILL.md:370 | |
| HIGH | Direct prompt injection via user comments in AI mention bot The '@mention Bot' workflow in section 5.1 directly concatenates user-provided `QUESTION` (extracted from `github.event.comment.body`) and `CONTEXT` (PR diff or issue body) into the AI prompt. A malicious user can easily include instructions like 'ignore previous instructions' or 'act as a malicious entity' in their comment, directly manipulating the AI's behavior and potentially leading to harmful or unintended responses. Implement robust prompt injection defenses. This typically involves wrapping user input in explicit data tags (e.g., `<user_question>...</user_question>`) and providing strong system instructions to the AI to treat content within these tags as data, not instructions. Consider using a separate, hardened prompt for user-facing interactions. | LLM | SKILL.md:500 | |
| MEDIUM | Sensitive environment variable access: $GITHUB_OUTPUT Access to sensitive environment variable '$GITHUB_OUTPUT' detected in shell context. Verify this environment variable access is necessary and the value is not exfiltrated. | Static | skills/github-workflow-automation/SKILL.md:46 | |
| MEDIUM | Persistence mechanism: Shell RC file modification Detected Shell RC file modification pattern. Persistence mechanisms allow malware to survive system restarts. Review this persistence pattern. Skills should not modify system startup configuration. | Static | skills/github-workflow-automation/SKILL.md:118 | |
| MEDIUM | Persistence mechanism: Shell RC file modification Detected Shell RC file modification pattern. Persistence mechanisms allow malware to survive system restarts. Review this persistence pattern. Skills should not modify system startup configuration. | Static | skills/github-workflow-automation/SKILL.md:120 | |
| MEDIUM | Potential prompt injection via PR diff content The AI review prompt in section 1.1 directly concatenates the `git diff` output (`${{ steps.diff.outputs.diff }}`) and changed filenames (`${{ steps.changed.outputs.files }}`) into the AI model's input. A malicious actor could craft a commit with diff content or filenames designed to manipulate the AI's instructions, potentially leading to biased reviews, generation of harmful content, or other unintended behaviors. Implement sanitization or structured input for AI prompts. For example, wrap user-controlled content in XML-like tags (e.g., `<diff>...</diff>`) or use specific API parameters for different content types if the AI model supports it. Instruct the AI model to treat content within these tags as data, not instructions. | LLM | SKILL.md:59 | |
| MEDIUM | Potential prompt injection via issue title and body The `TRIAGE_PROMPT` template in section 2.2 directly embeds the issue `title` and `body` (`Title: {title}\nBody: {body}`) into the AI prompt. A malicious user could craft an issue title or body containing instructions intended to manipulate the AI's behavior, such as forcing it to assign incorrect labels, generate misleading summaries, or bypass triage rules. Wrap user-controlled issue title and body in structured tags (e.g., `<issue_title>...</issue_title>`) and instruct the AI to treat content within these tags as data, not instructions. | LLM | SKILL.md:170 | |
| MEDIUM | Potential prompt injection via Git commit messages The 'AI Risk Assessment' step in section 3.2 sends `git log --oneline` output (`${process.env.CHANGES}`) directly to the AI for risk assessment. A malicious actor could craft commit messages containing instructions designed to manipulate the AI's risk assessment, potentially leading to an incorrect 'low' risk rating for a high-risk deployment, or bypassing manual approval requirements. Wrap the `CHANGES` content in structured tags (e.g., `<git_log>...</git_log>`) and instruct the AI to treat it as data. | LLM | SKILL.md:290 | |
| MEDIUM | Potential prompt injection via Git commit info and diffs The `smartCherryPick` function in section 4.2 constructs an AI prompt using `git show` output (`commitInfo`) and `git diff` output (`targetDiff`). If `commitHash` or `targetBranch` are derived from untrusted input, a malicious actor could craft commit messages or diff content to manipulate the AI's conflict analysis or resolution strategy. Ensure `commitInfo` and `targetDiff` are properly sanitized or wrapped in structured tags when passed to the AI, and instruct the AI to treat content within these tags as data. | LLM | SKILL.md:390 | |
| LOW | Missing `issues: write` permission for creating issues The 'Branch Cleanup' workflow in section 4.3 attempts to create a new issue (`github.rest.issues.create`) to report stale branches. However, the `cleanup` job does not explicitly declare `permissions: issues: write`. Without this permission, the `github.rest.issues.create` call will likely fail due to insufficient permissions, preventing the issue from being created. Add `permissions: issues: write` to the `cleanup` job definition to grant the necessary permissions for creating issues. | LLM | SKILL.md:449 | |
| LOW | Missing `issues: write` permission for creating comments The '@mention Bot' workflow in section 5.1 attempts to create a comment (`github.rest.issues.createComment`) in response to a user mention. However, the `respond` job does not explicitly declare `permissions: issues: write`. Without this permission, the `github.rest.issues.createComment` call will likely fail due to insufficient permissions, preventing the bot from responding. Add `permissions: issues: write` to the `respond` job definition to grant the necessary permissions for creating comments. | LLM | SKILL.md:488 |
Scan History
Embed Code
[](https://skillshield.io/report/c59930234d6777ce)
Powered by SkillShield