Trust Assessment
pulumi-upgrade-provider received a trust score of 29/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 1 critical, 2 high, 2 medium, and 0 low severity. Key findings include Command Injection via untrusted variables, Command Injection via untrusted repository scripts, and Prompt Injection: Contradictory Guardrails.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 56/100, indicating areas for improvement.
Last analyzed on April 1, 2026 (commit bbf441e6). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Prompt Injection: Contradictory Guardrails The 'Guardrails' section explicitly states: 'Never commit, push, or create branches manually; only run read-only git commands.' However, the 'Post-run Tasks' section later instructs the agent to perform a `gh api -X PATCH` operation, which is a write operation to a GitHub Pull Request. Furthermore, the guardrails explicitly allow `./scripts/upstream.sh checkout|rebase|check_in`, which can modify git state. This contradiction attempts to confuse or override the LLM's understanding of its own security rules, which is a form of prompt injection. Ensure that all instructions and guardrails provided to the LLM are consistent and do not contradict each other. If write operations are intended, the guardrails should accurately reflect the allowed scope of actions. | LLM | SKILL.md:88 |
| HIGH | Command Injection via untrusted variables The skill executes `upgrade-provider $ORG/$REPO`. If the `$ORG` or `$REPO` variables are derived from untrusted user input without proper sanitization, an attacker could inject arbitrary shell commands. For example, `$ORG=foo; rm -rf /;` could lead to critical system compromise. Ensure that `$ORG` and `$REPO` variables are strictly validated and sanitized to prevent shell metacharacters from being interpreted as commands. Consider using a safer method to pass arguments that avoids direct shell interpolation, or explicitly quoting variables. | Static | SKILL.md:15 |
| HIGH | Command Injection via untrusted repository scripts The skill explicitly instructs the agent to execute `./scripts/upstream.sh checkout` (and later mentions `rebase|check_in` are also allowed). If the target repository is untrusted (e.g., a user-provided GitHub repository), this allows the execution of arbitrary code from that repository, leading to potential system compromise. Avoid executing arbitrary scripts from untrusted or user-controlled repositories. If such execution is unavoidable, implement strict sandboxing, code review, or whitelisting mechanisms to ensure the safety of the scripts. | Static | SKILL.md:29 |
| MEDIUM | Excessive Permissions: Write access to GitHub PRs The skill performs a `gh api -X PATCH` operation to modify the body of a GitHub Pull Request. This requires the agent's GitHub token to have write permissions to the repository's PRs. While this might be a legitimate function of the skill, it contradicts the stated 'read-only git commands' guardrail and highlights the need for careful scoping of the agent's GitHub token to the minimum necessary permissions. Ensure the agent's GitHub token is scoped with the principle of least privilege, granting only the necessary permissions (e.g., `pull_requests:write` for the specific repository if required). Reconcile the skill's stated guardrails with its actual operational requirements. | Static | SKILL.md:77 |
| MEDIUM | Potential Data Exfiltration via Public PR Body The skill instructs the LLM to append a section to a GitHub PR body, including a list of 'concrete unblocker edits here, with file paths and intent'. If these edits contain sensitive information (e.g., internal system details, proprietary code snippets, or security vulnerabilities) that should not be publicly exposed, this could lead to unintended data exfiltration when the PR is made public. Add explicit instructions to the LLM to redact or generalize any sensitive information from the 'concrete unblocker edits' before they are appended to a public GitHub PR. Implement a review step for the generated PR body if sensitive information is a concern. | Static | SKILL.md:69 |
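The two command-injection findings above share a common remediation pattern: validate repository identifiers against an allow-listed character set before use, and always quote expansions so the shell never interprets metacharacters in their values. A minimal POSIX-shell sketch of that pattern (the `validate_slug` helper, its character set, and the sample values are illustrative assumptions, not part of the skill itself):

```shell
#!/bin/bash
set -euo pipefail

# Hypothetical helper: reject any value containing characters outside a
# conservative allow-list (alphanumerics, dot, underscore, hyphen), as
# well as the empty string.
validate_slug() {
  case "$1" in
    *[!A-Za-z0-9._-]*|"") echo "invalid name: $1" >&2; return 1 ;;
    *) return 0 ;;
  esac
}

ORG='pulumi'       # sample values for illustration only
REPO='pulumi-aws'
validate_slug "$ORG"
validate_slug "$REPO"

# Quoted expansion: even if validation were skipped, metacharacters in
# the values would be passed as literal argument text, not executed.
# (echo stands in for the real upgrade-provider invocation.)
echo upgrade-provider "$ORG/$REPO"   # prints: upgrade-provider pulumi/pulumi-aws
```

Validation and quoting are complementary here: quoting prevents the shell from splitting or interpreting the values, while the allow-list additionally rejects values such as `foo; rm -rf /` that could be abused downstream, for example as flags or path components.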
[Full report](https://skillshield.io/report/8069b64341da5ac6) · Powered by SkillShield