Trust Assessment
find-skills received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings, both critical (0 high, 0 medium, and 0 low severity): potential command injection via `npx skills` arguments, and supply chain risk with excessive permissions from installing untrusted packages.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Potential Command Injection via `npx skills` arguments | LLM | SKILL.md:39 |
| CRITICAL | Supply Chain Risk and Excessive Permissions from installing untrusted packages | LLM | SKILL.md:30 |

**CRITICAL: Potential Command Injection via `npx skills` arguments** (LLM layer, SKILL.md:39)

The skill explicitly instructs the LLM to execute `npx skills` commands, where arguments such as `[query]`, `<package>`, `<owner/repo@skill>`, and `my-xyz-skill` are intended to be derived from user input or contextual information. Without proper sanitization and shell escaping of these arguments before they are passed to the underlying shell, a malicious user could inject arbitrary shell commands, leading to unauthorized execution.

Remediation: implement robust input sanitization and shell escaping for all arguments passed to `npx skills` commands. Ensure that user-provided strings are never interpolated directly into shell commands without proper handling, and consider using a safe execution mechanism that prevents command chaining or argument injection.

**CRITICAL: Supply Chain Risk and Excessive Permissions from installing untrusted packages** (LLM layer, SKILL.md:30)

The skill instructs the LLM to install external packages using `npx skills add <package>` or `npx skills add <owner/repo@skill> -g -y`. This introduces a significant supply chain risk: the source of these packages is not verified, allowing the installation of malicious or compromised code. Furthermore, the `-g` flag for global installation grants these potentially untrusted packages broad execution permissions across the user's environment, leading to excessive permissions and a wider attack surface.

Remediation:
1. **Restrict installation sources**: only allow installation from a curated, trusted registry or a whitelist of known safe packages/repositories.
2. **Version pinning**: instruct the LLM to always specify exact versions or commit hashes for installed packages to prevent unexpected or malicious updates.
3. **Scope reduction**: avoid global installations (`-g`) unless absolutely necessary; prefer local installations or isolated environments.
4. **User confirmation**: always require explicit user confirmation before installing any new skill, especially from external sources.
5. **Security scanning**: integrate automated security scanning of skill packages before installation.
Embed Code
[SkillShield Report](https://skillshield.io/report/de903fbef71b8478)
Powered by SkillShield