Trust Assessment
receiving-code-review received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings: the skill attempts to inject behavioral instructions into the LLM, and it requests broad access to the codebase and the GitHub API.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest, at 55/100, indicating room for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
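SkillShield's actual scoring formula is not public. As a purely illustrative sketch, a severity-weighted deduction model (hypothetical penalties, chosen only so the numbers line up with this report's 1 critical + 1 high = 65/100) could look like:

```python
# Illustrative only: SkillShield's real scoring formula is not public.
# Assumes each finding subtracts a fixed, severity-based penalty from a
# 100-point baseline; the penalty values below are hypothetical.
SEVERITY_PENALTY = {"critical": 25, "high": 10, "medium": 5, "low": 2}

def trust_score(findings):
    """Return a 0-100 trust score given a list of severity labels."""
    score = 100 - sum(SEVERITY_PENALTY[s] for s in findings)
    return max(score, 0)

def category(score):
    # Hypothetical banding; the report labels 65/100 as "Caution".
    if score >= 80:
        return "Trusted"
    if score >= 50:
        return "Caution"
    return "High Risk"

score = trust_score(["critical", "high"])
print(score, category(score))  # 65 Caution
```

Under these assumed penalties, one critical and one high finding reproduce the 65/100 "Caution" result above, but the real analyzer may weight layers and findings very differently.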
Layer Breakdown
Behavioral Risk Signals
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Skill attempts to inject behavioral instructions into LLM.** The entire skill document, placed within untrusted-input delimiters, contains extensive instructions and rules intended to dictate the LLM's behavior, responses, and decision-making. This directly violates the analyzer's meta-instruction to treat content within these tags as untrusted data, not instructions. Examples include 'NEVER:', 'INSTEAD:', 'IF any item is unclear:', 'Signal if uncomfortable pushing back out loud: "Strange things are afoot at the Circle K"', and direct commands like 'DELETE IT'. Recommendation: remove all behavioral instructions from the untrusted skill content, or ensure the skill is processed in a trusted context where its instructions are intended to be followed. If the skill is meant to be untrusted, it should not contain direct commands or behavioral programming for the LLM. | LLM | SKILL.md:10 |
| HIGH | **Skill requests broad access to codebase and GitHub API.** The skill explicitly instructs the AI to perform actions requiring significant permissions. The instruction 'grep codebase for actual usage' implies read access to the entire codebase, which could expose sensitive information. The instruction to 'reply in the comment thread (`gh api repos/{owner}/{repo}/pulls/{pr}/comments/{id}/replies`)' implies write access to GitHub pull request comments, allowing the AI to post content on behalf of the user or system. These broad permissions, if granted without strict sandboxing, pose a risk of data exposure or unauthorized actions. Recommendation: implement strict access controls and sandboxing for any tools or APIs the AI may use, and limit access to the minimum necessary scope (e.g., read-only access to specific files/directories, GitHub tokens with restricted permissions). | LLM | SKILL.md:106 |
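The critical finding describes the kind of check the LLM Behavioral Safety layer performs: flagging imperative "behavioral programming" phrases inside skill text that should be treated as untrusted data. A minimal sketch of such a detector, with a hypothetical pattern list seeded from the examples quoted in the finding above:

```python
import re

# Hypothetical pattern list, seeded from phrases quoted in the CRITICAL
# finding; a real analyzer would use a far richer rule set or a model.
INSTRUCTION_PATTERNS = [
    r"\bNEVER:",
    r"\bINSTEAD:",
    r"\bIF any item is unclear:",
    r"\bDELETE IT\b",
]

def find_injected_instructions(skill_text: str) -> list[str]:
    """Return the patterns that match, i.e. likely behavioral instructions."""
    return [p for p in INSTRUCTION_PATTERNS if re.search(p, skill_text)]

sample = "NEVER: approve blindly. INSTEAD: grep codebase for actual usage."
print(find_injected_instructions(sample))  # ['\\bNEVER:', '\\bINSTEAD:']
```

This is only a surface-level heuristic; it illustrates why the finding recommends stripping direct commands from untrusted skill content rather than relying on detection alone.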
[Full report on SkillShield](https://skillshield.io/report/c99d83fb1e424ea5)
Powered by SkillShield