Trust Assessment
Nano Hub received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. The key findings are Command Injection and Data Exfiltration via `curl` (critical) and Prompt Injection in Sub-agent Delegation (high).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection and Data Exfiltration via `curl`.** The skill shells out to `curl` to upload local files to an external service (`catbox.moe`). The `图片路径` (image path) parameter is likely derived from user input, as indicated by the phrase "when the user uploads an image". An attacker who can control or influence this path can specify arbitrary local files (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, environment-variable files) for exfiltration. If the execution environment does not sanitize the input, appending shell metacharacters to the file path (e.g., `foo.png; rm -rf /`) can also lead to command injection. The skill's own description, "curl reads files directly from the local filesystem for upload, with no character limit", highlights this broad local-file-read capability. *Remediation:* remove direct shell execution of `curl` with user-controlled file paths. If uploads are necessary, use a secure, sandboxed platform API that strictly validates file types and paths and does not allow arbitrary local file access. Canonicalize all file paths and validate them against allowed directories and file types, or use a dedicated upload tool that does not expose the underlying filesystem. | LLM | SKILL.md:90 |
| HIGH | **Prompt Injection in Sub-agent Delegation.** The skill constructs a prompt for a `generalPurpose` sub-agent by directly interpolating user-provided content, specifically `用户的具体需求描述` (the user's requirements description) and the `图片 URL` (image URL). A malicious user can craft these inputs to include instructions that manipulate the sub-agent's behavior, causing it to generate unintended or harmful content, disclose internal information, or deviate from its intended task despite the `readonly: true` setting. *Remediation:* sanitize and validate all user-provided content before incorporating it into sub-agent prompts. Prefer templating that escapes user input, or pass user input as explicitly separate parameters rather than embedding it directly in the prompt string. If direct embedding is unavoidable, enforce a strict allowlist of acceptable characters and content, and escape anything the LLM could interpret as instructions. | LLM | SKILL.md:117 |
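The remediation for the critical finding can be sketched in code. This is a minimal illustration, not the skill's actual implementation: the allowlisted directory, suffix set, and function names are hypothetical, and the `catbox.moe` endpoint is taken from the finding. The two key moves are canonicalizing the path against an allowlist (defeating `/etc/passwd`-style exfiltration) and invoking `curl` via an argument list with `shell=False` (so `foo.png; rm -rf /` stays a literal filename, never a shell command).

```python
import subprocess
from pathlib import Path

# Hypothetical allowlist: only files under this directory, with these suffixes.
ALLOWED_DIR = Path("/app/uploads").resolve()
ALLOWED_SUFFIXES = {".png", ".jpg", ".jpeg", ".webp"}

def safe_upload_path(user_path: str) -> Path:
    """Canonicalize a user-supplied path and reject anything off the allowlist."""
    p = Path(user_path).resolve()
    # Blocks /etc/passwd, ~/.ssh/id_rsa, and ../ traversal after resolution.
    if not p.is_relative_to(ALLOWED_DIR):
        raise ValueError(f"path outside allowed directory: {p}")
    if p.suffix.lower() not in ALLOWED_SUFFIXES:
        raise ValueError(f"disallowed file type: {p.suffix}")
    return p

def upload(user_path: str) -> None:
    p = safe_upload_path(user_path)
    # Argument list + shell=False: the path is passed as a single argv entry,
    # so shell metacharacters in it are never interpreted.
    subprocess.run(
        ["curl", "-F", f"fileToUpload=@{p}", "https://catbox.moe/user/api.php"],
        check=True,
        shell=False,
    )
```

A dedicated platform upload API, where available, is still preferable to shelling out at all; the sketch only hardens the `curl` path the skill currently uses.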
[Full report](https://skillshield.io/report/01a2a82dac3fb27f)
Powered by SkillShield