Trust Assessment
skillbench received a trust score of 74/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include "Unpinned npm package dependency in installation instructions" and "Agent granted access to broad-capability CLI tool."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned npm package dependency in installation instructions.** The installation instructions recommend installing the `@versatly/skillbench` package globally without specifying a version (`npm install -g @versatly/skillbench`), so `npm install` always fetches the latest release. If a malicious update is pushed to this package, it would be installed automatically, enabling a supply chain compromise; this is a common risk for globally installed CLI tools. Remediation: pin the package version in the installation instructions (e.g., `npm install -g @versatly/skillbench@1.0.0`), or use a lockfile mechanism if applicable, to ensure deterministic and secure installations. | LLM | SKILL.md:12 |
| HIGH | **Agent granted access to broad-capability CLI tool.** The skill manifest grants the AI agent access to the `skillbench` CLI tool via `"requires": {"bins": ["skillbench"]}`. As described in the skill documentation, `skillbench` has broad capabilities, including filesystem read/write (e.g., `export`, `dashboard`, `badge`), network operations (e.g., `sync --vault`, `sync --clawhub`), and potentially invoking other system commands (e.g., when testing other skills). Granting an AI agent such a powerful tool without explicit sandboxing or fine-grained permission controls introduces a significant risk of misuse, data exfiltration, or system compromise if the agent's behavior is compromised or misaligned. Remediation: implement fine-grained access control so the agent can only execute specific subcommands or limited parameters, consider sandboxing the execution environment, and require explicit user confirmation for sensitive operations (data export, sync to external services, opening a browser). | LLM | Manifest:3 |
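The unpinned-dependency finding can be caught mechanically. Below is a minimal sketch of such a check; the regex, function name, and heuristic are ours for illustration, not part of SkillShield's scanner. It flags `npm install -g` commands whose package argument lacks an exact version pin:

```python
import re

def find_unpinned_installs(text: str) -> list[str]:
    """Return package names from `npm install -g` commands that lack a version pin."""
    hits = []
    for line in text.splitlines():
        m = re.search(r"npm install -g\s+(\S+)", line)
        if m:
            pkg = m.group(1)
            # Scoped packages start with "@"; a version pin adds a second "@",
            # so after stripping the leading "@" any remaining "@" means pinned.
            if "@" not in pkg.lstrip("@"):
                hits.append(pkg)
    return hits

print(find_unpinned_installs("npm install -g @versatly/skillbench"))        # → ['@versatly/skillbench']
print(find_unpinned_installs("npm install -g @versatly/skillbench@1.0.0"))  # → []
```

A check like this only inspects documentation text; pairing it with a lockfile (or `npm ci` in controlled environments) is what actually makes installs deterministic.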
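The remediation for the second finding (subcommand-level control) can be sketched as a default-deny gate in front of the CLI. The subcommand names below come from the finding itself; the policy grouping and the `authorize` function are hypothetical, assuming a wrapper mediates every agent invocation of `skillbench`:

```python
# Illustrative policy split, not SkillShield's: read-mostly subcommands are
# allowed freely, while sensitive ones require explicit user approval.
ALLOWED = {"badge", "dashboard"}
NEEDS_CONFIRMATION = {"export", "sync"}

def authorize(subcommand: str, user_confirmed: bool = False) -> bool:
    """Decide whether the agent may execute `skillbench <subcommand>`."""
    if subcommand in ALLOWED:
        return True
    if subcommand in NEEDS_CONFIRMATION:
        return user_confirmed
    return False  # default-deny anything unlisted

print(authorize("badge"))                      # → True
print(authorize("sync"))                       # → False
print(authorize("sync", user_confirmed=True))  # → True
```

The default-deny branch is the important design choice: new or unexpected subcommands stay blocked until someone deliberately adds them to a list.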