Trust Assessment
mgrep-code-search received a trust score of 71/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 2 medium, and 0 low severity. Key findings include Unpinned Third-Party Dependency, Potential Data Exfiltration via AI Synthesis, and Potential Command Injection via User Input.
The analysis covered 4 layers: manifest_analysis, llm_behavioral_safety, dependency_graph, static_code_analysis. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 11, 2026 (commit 326f2466). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned Third-Party Dependency.** The skill runs `bunx @mixedbread/mgrep` without specifying a version, so `bunx` always fetches the latest release of the `@mixedbread/mgrep` package. This is a supply-chain risk: a malicious update could be pulled and executed automatically, potentially compromising the system. Pin the dependency to a specific, known-good version (e.g., `bunx @mixedbread/mgrep@1.2.3 watch`) and review updates before bumping the pin; the sketch after this table shows a pinned invocation. | Unknown | SKILL.md:30 |
| MEDIUM | **Potential Data Exfiltration via AI Synthesis.** The skill documents an `-a, --answer` option that generates an AI-powered synthesis of results, which implies code snippets or search results from the local codebase may be sent to an external AI service. If the codebase contains sensitive, proprietary, or confidential information, this could amount to unintended data exfiltration to a third-party AI provider. Warn users about the implications of `-a`, advise against it for proprietary or confidential codebases, and confirm that the AI service complies with applicable data-privacy and security policies. | Unknown | SKILL.md:56 |
| MEDIUM | **Potential Command Injection via User Input.** The skill demonstrates shell commands built from user-supplied values (such as the search query or path) passed to `bunx @mixedbread/mgrep`. If an agent interpolates untrusted input into these command strings without sanitization or escaping, a malicious user could inject arbitrary shell commands (e.g., `"query"; rm -rf /`). Sanitize and escape all untrusted input, or pass it as discrete process arguments rather than through a shell string; a sketch of this approach follows the table. | Unknown | SKILL.md:37 |
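The remediation for the HIGH and command-injection findings comes down to two habits: pin the package version, and never splice untrusted input into a shell string. Below is a minimal sketch, assuming the agent runs under Bun (the runtime that ships `bunx`); the `runMgrepSearch` helper, the pinned version number, and the exact mgrep subcommand/argument layout are illustrative assumptions, not taken from the skill's SKILL.md.

```typescript
// Sketch: invoke mgrep without a shell, so user input is never parsed as commands.
// Assumptions: Bun runtime; "1.2.3" stands in for an audited, known-good version;
// the "search"/path argument layout is illustrative - check SKILL.md for the real CLI.
const PINNED_VERSION = "1.2.3";

async function runMgrepSearch(userQuery: string, path = "."): Promise<string> {
  // Arguments are passed as an array, so the query reaches the process verbatim;
  // shell metacharacters like `;`, `|`, or `$( )` are never interpreted.
  const proc = Bun.spawn(
    ["bunx", `@mixedbread/mgrep@${PINNED_VERSION}`, "search", userQuery, path],
    { stdout: "pipe", stderr: "pipe" },
  );

  const output = await new Response(proc.stdout).text();
  const exitCode = await proc.exited;
  if (exitCode !== 0) {
    throw new Error(`mgrep exited with code ${exitCode}`);
  }
  return output;
}
```

Because the query occupies its own argv slot, input such as `"query"; rm -rf /` arrives at mgrep as a literal search string rather than being executed by a shell, and the hardcoded version keeps `bunx` from silently pulling a newer release.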
[View the full SkillShield report](https://skillshield.io/report/c0d7c75ec7025858)