Trust Assessment
merge-resolver received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 1 high and 1 medium severity (0 critical, 0 low). Key findings include an unpinned npm package dependency and a requirement for broad filesystem access.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned npm package dependency.** The skill uses `npx ai-merge-resolve` without specifying a version, so each execution fetches and runs the latest available version of the `ai-merge-resolve` package from npm. If the package maintainer's account is compromised, or a malicious version is published, the AI agent could execute arbitrary malicious code without explicit user consent or review of the specific version being run. *Remediation:* specify a precise version, e.g. `npx ai-merge-resolve@1.2.3 [file]`, or use a lockfile mechanism if `npx` supports it in this context; alternatively, bundle the dependency or use a trusted, audited registry. | LLM | SKILL.md:8 |
| MEDIUM | **Tool requires broad filesystem access.** The `ai-merge-resolve` tool is described as reading "conflicted files" and can operate on a specific file or "all conflicts in the repo," which implies broad read access to the local filesystem, particularly within a Git repository. If the AI agent can execute this tool without proper sandboxing or user confirmation for file access, it could read sensitive files beyond its intended scope, especially when combined with a malicious or compromised version of the tool. *Remediation:* implement strict sandboxing for skill execution environments, require explicit user confirmation before the tool accesses or modifies files (especially repo-wide operations), and limit the directories the tool can access to only those strictly necessary for its function. | LLM | SKILL.md:3 |
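The pinning remediation from the HIGH finding can be sketched as follows; `1.2.3` is a placeholder, not a known release of `ai-merge-resolve`, and the actual version would be chosen after reviewing that release:

```shell
# Unpinned: every run fetches whatever version is currently tagged
# "latest" on npm, so a newly published malicious release would be
# executed without any review.
npx ai-merge-resolve path/to/conflicted-file

# Pinned: npx resolves exactly this version, so a later compromised
# release cannot be pulled in silently. (1.2.3 is hypothetical.)
npx ai-merge-resolve@1.2.3 path/to/conflicted-file
```

Pinning alone does not verify the package contents, so auditing the pinned version (or vendoring the dependency) remains advisable, as the finding notes.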
Embed Code
[View the full report](https://skillshield.io/report/5f389d2474f09824)
Powered by SkillShield