Trust Assessment
find-bugs received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 0 high, 1 medium, and 1 low severity. Key findings include Shell Command Execution and Broad Local File System Read Access.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | Broad Local File System Read Access The skill explicitly instructs the AI agent to read local files, stating: 'If output is truncated, read each changed file individually until you have seen every changed line' and 'List all files modified in this branch before proceeding'. Furthermore, subsequent instructions like 'For each changed file, identify and list: All user inputs...' imply deep content analysis of these files. This grants the agent broad read access to the local repository's file system. While necessary for a code analysis skill, this permission is powerful and could be abused if the agent were compromised or if the skill were maliciously modified to exfiltrate sensitive file contents. Ensure the AI agent operates within a strictly sandboxed environment that limits file system access to only the necessary scope (e.g., the current repository or specific directories). Implement data loss prevention (DLP) mechanisms to prevent the exfiltration of sensitive information read from local files. | LLM | SKILL.md:20 |
| LOW | Shell Command Execution The skill explicitly instructs the AI agent to execute a shell command: `git diff $(gh repo view --json defaultBranchRef --jq '.defaultBranchRef.name')...HEAD`. While this command is integral to the skill's function of analyzing local code changes, it grants the agent shell execution capabilities. If the agent's execution environment is not properly sandboxed, or if the command's inputs could be manipulated by an attacker, it could lead to arbitrary command execution. The command itself does not appear to have an obvious injection vulnerability from untrusted user input in this specific context. Ensure the AI agent operates within a strictly sandboxed environment that limits shell access to only necessary commands and prevents arbitrary command execution. Implement robust input validation and sanitization if any part of the command could be derived from untrusted sources. | LLM | SKILL.md:19 |
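The MEDIUM finding recommends limiting file system access to the current repository. One minimal sketch of such a containment check, in Python, is below; the names (`REPO_ROOT`, `is_within_repo`) are illustrative and not part of SkillShield or the skill itself:

```python
from pathlib import Path

# Hypothetical sandbox root; in practice this would be the checked-out
# repository directory the agent is allowed to read.
REPO_ROOT = Path("/repo").resolve()

def is_within_repo(path: str, root: Path = REPO_ROOT) -> bool:
    """Return True only if `path` resolves to a location under `root`.

    Resolving first defeats `..` traversal: the joined path is normalized
    before the containment check, so "../etc/passwd" escapes the root and
    is rejected.
    """
    resolved = (root / path).resolve()
    return resolved == root or root in resolved.parents
```

An agent runtime could gate every read through such a predicate, refusing paths that resolve outside the repository, and pair it with DLP scanning on anything it does read.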
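The LOW finding notes that the `git diff $(gh repo view ...)` command substitutes the default branch name returned by the GitHub CLI into a shell command. A minimal hardening sketch, assuming a hypothetical wrapper (`safe_diff_range` and the allow-list regex are illustrative, not from the skill): validate the branch name against an allow-list and build an argv list rather than a shell string, so the value is never interpreted by a shell:

```python
import re

# Allow-list of characters plausible in a git branch name. Anything else
# (spaces, semicolons, backticks, `$(...)`) is rejected before use.
BRANCH_RE = re.compile(r"^[A-Za-z0-9._\-/]+$")

def safe_diff_range(default_branch: str) -> list[str]:
    """Build a `git diff <branch>...HEAD` argv after validating the branch.

    Returning an argv list (for subprocess.run without shell=True) means
    the branch value is passed as a literal argument, not shell-parsed.
    """
    if not BRANCH_RE.fullmatch(default_branch):
        raise ValueError(f"suspicious branch name: {default_branch!r}")
    return ["git", "diff", f"{default_branch}...HEAD"]
```

For example, `safe_diff_range("main")` yields `["git", "diff", "main...HEAD"]`, while an injected value like `"main; rm -rf /"` raises `ValueError` instead of reaching the shell.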