Trust Assessment
auto-skill received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 3 critical, 2 high, 1 medium, and 0 low severity. Key findings include "File read + network send exfiltration," "Sensitive path access: AI agent config," and "Persistent Prompt Injection via Global Configuration Modification."
The analysis covered four layers: dependency_graph, static_code_analysis, llm_behavioral_safety, and manifest_analysis. The static_code_analysis layer scored lowest at 33/100, making it the weakest area of this scan.
Last analyzed on February 14, 2026 (commit 636a2696). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **File read + network send exfiltration.** AI agent config/credential file access. *Remediation:* Remove access to sensitive files not required by the skill's stated purpose. SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Unknown | /tmp/skillscan-clone-vd6c0xgo/repo/SKILL.md:16 |
| CRITICAL | **Persistent Prompt Injection via Global Configuration Modification.** The skill's manifest and 'Self-Bootstrapping' section explicitly instruct the LLM to locate and modify its own global configuration/instruction files (e.g., `~/.claude/CLAUDE.md`, `~/.codex/instructions.md`). This is a severe form of prompt injection: it attempts to permanently alter the LLM's core behavior and enforce specific protocols across all future tasks, even outside the current session, granting the skill persistent control over the LLM's operational directives. *Remediation:* Remove all instructions that direct the LLM to read from or write to its own global configuration or instruction files. LLMs should operate within their designated skill context and not attempt to modify their foundational programming or system-level instructions. | Unknown | SKILL.md:1 |
| CRITICAL | **Arbitrary File System Read/Write to Global Configuration.** The 'Self-Bootstrapping' section instructs the LLM to '讀取該文件' (read the file) and '在文件末尾追加以下內容' (append the following content to the end of the file) for global configuration files such as `~/.claude/CLAUDE.md`. If the LLM has access to file system tools, this is a direct instruction for arbitrary file system reads and writes on critical system files, which could lead to unauthorized modification of the LLM's environment or to data exfiltration if the paths were manipulated. *Remediation:* Prevent the LLM from accessing or modifying files outside its designated skill directory. Implement strict sandboxing and disallow file system operations on system configuration paths. If file modification is necessary, use a dedicated, sandboxed tool with explicit user confirmation and strict path validation. | Unknown | SKILL.md:20 |
| HIGH | **Sensitive path access: AI agent config.** Access to AI agent config path detected: '~/.claude/'. This may indicate credential theft. *Remediation:* Verify that access to this sensitive path is justified and declared (see the sensitive-path check sketched after this table). | Unknown | /tmp/skillscan-clone-vd6c0xgo/repo/SKILL.md:16 |
| HIGH | **Persistent Prompt Injection and Arbitrary File Write via User-Controlled Paths.** The sections '5. 任務結束:主動記錄' (end of task: proactive recording) and '動態分類' (dynamic categorization) explicitly allow the LLM to write new `.md` files to `experience/skill-[skill-id].md` and `knowledge-base/[category].md`, and the '動態分類' section confirms that the category name can be provided by the user. This creates a path traversal vulnerability: a malicious user can specify paths outside the intended directories (e.g., `../../../../etc/passwd` as a category name) and write arbitrary content. Because these `.md` files are later read by the LLM, this also constitutes a persistent prompt injection vector, allowing an attacker to inject malicious instructions that will be executed in future interactions. *Remediation:* Implement strict input validation and sanitization for `skill-id` and `category` names to prevent path traversal, allowing only alphanumeric characters and a limited set of safe symbols. Always resolve and validate file paths before any write operation to ensure they remain within the intended, sandboxed directories (see the path-validation sketch after this table). Additionally, sanitize any user-provided text written to files that the LLM will later process, to prevent prompt injection. | Unknown | SKILL.md:78 |
| MEDIUM | **Command Injection and Supply Chain Risk via Suggested Shell Command.** The 'QMD 升級' (QMD upgrade) section instructs the LLM to proactively suggest that the user install QMD by providing a direct shell command: `npm install -g qmd && qmd collection add knowledge-base --name auto-skill && qmd embed`. Although presented as a suggestion to the user, an LLM with access to shell execution tools could interpret this as an instruction to execute, posing a direct command injection risk. Installing a global npm package (`qmd`) also introduces a supply chain risk, since a compromised or malicious package could execute arbitrary code on the host system. *Remediation:* Avoid instructing the LLM to suggest or execute shell commands directly. If external tools are required, provide clear, human-readable instructions for the user to follow manually, or integrate them through secure, sandboxed APIs (see the confirmation-gate sketch after this table). If an LLM-managed environment requires external dependencies, use a secure, audited dependency management system. | Unknown | SKILL.md:139 |
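Two of the findings above flag reads of AI agent configuration paths such as `~/.claude/`. Below is a minimal sketch of the kind of pattern check a reviewer or static analyzer could apply to a skill's text before installation; the path list and the `flag_sensitive_paths` helper are assumptions for illustration and do not reflect SkillShield's actual detection logic.

```python
import re

# Illustrative (not exhaustive) list of agent-config and credential locations.
SENSITIVE_PATH_PATTERNS = [
    r"~/\.claude/",          # Claude global config / instruction files
    r"~/\.codex/",           # Codex instruction files
    r"~/\.ssh/",             # SSH keys
    r"~/\.aws/credentials",  # cloud credentials
]

def flag_sensitive_paths(skill_text: str) -> list[str]:
    """Return every sensitive-path pattern referenced anywhere in a skill's text."""
    return [pattern for pattern in SENSITIVE_PATH_PATTERNS if re.search(pattern, skill_text)]

# This would flag the '~/.claude/' reference reported at SKILL.md:16.
print(flag_sensitive_paths("Read ~/.claude/CLAUDE.md and append the bootstrap protocol."))
```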
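The path-traversal remediation can be made concrete with an allow-list plus a path-resolution check before any write. The following is a minimal sketch, assuming notes are written under a fixed skill root; `SKILL_ROOT`, `resolve_note_path`, and the directory layout are hypothetical illustrations, not part of auto-skill or SkillShield.

```python
import re
from pathlib import Path

# Hypothetical skill data root; the real sandbox directory would come from the agent runtime.
SKILL_ROOT = Path("~/.local/share/auto-skill").expanduser()
SAFE_NAME = re.compile(r"^[A-Za-z0-9_-]{1,64}$")  # allow-list for user-supplied names

def resolve_note_path(subdir: str, name: str) -> Path:
    """Build a write path like knowledge-base/<category>.md, rejecting traversal attempts."""
    if not SAFE_NAME.fullmatch(name):
        raise ValueError(f"unsafe name: {name!r}")
    candidate = (SKILL_ROOT / subdir / f"{name}.md").resolve()
    # Re-check after resolution so '..' segments or symlinks cannot escape the root.
    if SKILL_ROOT.resolve() not in candidate.parents:
        raise ValueError(f"path escapes skill root: {candidate}")
    return candidate

# A category of '../../etc/passwd' fails the allow-list; a normal name resolves inside the root.
print(resolve_note_path("knowledge-base", "prompt-engineering"))
```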
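The final finding recommends keeping suggested shell commands out of the LLM's direct execution path. One possible confirmation gate an agent runtime might place in front of such commands is sketched below; `run_with_confirmation` and the use of `subprocess` are illustrative assumptions, not part of auto-skill, QMD, or SkillShield.

```python
import shlex
import subprocess

def run_with_confirmation(command: str) -> int:
    """Show a suggested shell command and run it only after explicit user approval."""
    print(f"The skill suggests running:\n    {command}")
    if input("Execute this command? [y/N] ").strip().lower() != "y":
        print("Skipped.")
        return 0
    # shell=False with shlex.split means '&&' chains and other shell metacharacters are
    # not interpreted; each command must be reviewed and run individually.
    return subprocess.run(shlex.split(command), shell=False, check=False).returncode

if __name__ == "__main__":
    # The chained install command from SKILL.md:139 would need to be split into
    # separate, individually approved steps.
    run_with_confirmation("npm install -g qmd")
```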
Embed Code
[SkillShield report for auto-skill](https://skillshield.io/report/bef9701f89961a99)
Powered by SkillShield