Trust Assessment
code-reviewer received a trust score of 85/100, placing it in the Mostly Trusted category. This skill passed most security checks, with only minor considerations noted.
SkillShield's automated analysis identified one finding: zero critical, one high, zero medium, and zero low severity. The single finding is a high-severity prompt injection via an instruction to open a local file.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
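For illustration, the per-layer scores could be combined into an overall trust score along these lines. This is a minimal sketch, assuming equal-ish weights and a weak-layer cap; SkillShield's actual scoring formula is not disclosed in this report, and the layer names and weights below are hypothetical.

```python
# Hypothetical sketch of combining per-layer scores into an overall trust score.
# The real SkillShield weighting is not published; this assumes a weighted
# average, plus a policy that any layer below the floor caps the result.

LAYER_WEIGHTS = {
    "manifest": 0.2,      # Manifest Analysis
    "static": 0.3,        # Static Code Analysis
    "dependency": 0.2,    # Dependency Graph
    "behavioral": 0.3,    # LLM Behavioral Safety
}

def trust_score(layer_scores: dict[str, int], floor: int = 70) -> int:
    """Weighted average of layer scores; a layer below the floor caps the total."""
    if any(score < floor for score in layer_scores.values()):
        # One weak layer dominates the overall score (assumed policy).
        return min(layer_scores.values())
    total = sum(LAYER_WEIGHTS[name] * score for name, score in layer_scores.items())
    return round(total)

# Illustrative scores only; the report does not list per-layer numbers.
scores = {"manifest": 90, "static": 85, "dependency": 90, "behavioral": 78}
print(trust_score(scores))  # → 85
```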
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Prompt injection via instruction to open local file.** The skill contains a direct instruction for the LLM to 'open' a local file (`resources/implementation-playbook.md`). This is a prompt injection attempt that could manipulate the LLM's behavior to access and potentially exfiltrate the contents of local files. While the specific file path is hardcoded, it demonstrates a capability for the LLM to interact with the local filesystem, which could be exploited if the path were made dynamic or if other sensitive files were targeted. *Remediation:* remove or rephrase the instruction to 'open' a local file. Either inline the content of `resources/implementation-playbook.md` directly in the skill definition if it is meant to be part of the LLM's knowledge, or instruct the user to consult the file manually. Avoid giving the LLM direct commands to access the local filesystem. | LLM | SKILL.md:10 |
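The remediation above asks skill authors not to give the model direct filesystem commands. As an illustration of how such an instruction could be caught mechanically, here is a minimal heuristic scanner; the regex, severity mapping, and function names are assumptions for this sketch, not SkillShield's actual rules:

```python
import re

# Flag skill instructions that tell the model to open/read/load a local file.
# The pattern is deliberately narrow: a filesystem verb followed shortly by a
# path with a common text/config extension.
FILE_ACCESS = re.compile(
    r"\b(open|read|load|cat)\b[^.\n]{0,60}?`?([\w./-]+\.(?:md|txt|json|yaml|py))`?",
    re.IGNORECASE,
)

def scan_skill(text: str) -> list[dict]:
    """Return one finding per line that instructs the model to access a file."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        match = FILE_ACCESS.search(line)
        if match:
            findings.append({
                "severity": "HIGH",
                "line": lineno,
                "verb": match.group(1),
                "path": match.group(2),
            })
    return findings

# Example resembling the flagged instruction in SKILL.md:10.
skill_md = "Before reviewing, open resources/implementation-playbook.md for context."
for finding in scan_skill(skill_md):
    print(finding)
```

A real behavioral-safety layer would go further (the report's fourth layer uses an LLM, not a regex), but a cheap static check like this catches the hardcoded case before the expensive analysis runs.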
Full report: https://skillshield.io/report/a158cf20f2409c0a