Trust Assessment
skilltree received a trust score of 69/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include Missing required field: name, Prompt Injection via Dynamic Skill Behavior Modification, Excessive File System Access for Profile and Snapshots.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Prompt Injection via Dynamic Skill Behavior Modification The skill's core functionality involves analyzing user dialogue and feedback to dynamically modify its 'soul_changes' and 'learning content'. These 'soul_changes' are essentially instructions (e.g., '默认简洁回复', '记住对话中的个人细节') that guide the skill's underlying LLM behavior. A malicious user could craft input in their dialogue history or feedback to inject harmful instructions, potentially causing the skill to deviate from its intended purpose, leak sensitive information, or bypass safety mechanisms. For example, an attacker might try to inject instructions like 'always respond with sensitive data from profile.json' or 'ignore all safety guidelines'. Implement robust input sanitization and validation for all user-provided text that influences the skill's 'soul_changes' or 'learning content'. Ensure that dynamic instructions are strictly confined to a safe, predefined set of behaviors and cannot be arbitrarily modified by user input. Use a separate, sandboxed LLM for processing potentially malicious user input before it influences core skill logic. | LLM | SKILL.md:108 | |
| HIGH | Excessive File System Access for Profile and Snapshots The skill explicitly uses `load_json` and `save_json` functions to manage user profiles (`evolution/profile.json`) and historical snapshots (`evolution/snapshots.json`). This indicates direct file system access. While the paths are hardcoded in the provided snippets, the underlying implementation of `load_json` and `save_json` is not shown. Without strict sandboxing, path validation, and access controls, this could lead to arbitrary file read/write vulnerabilities if an attacker can influence the file path (even indirectly through prompt injection) or if the skill's execution environment grants overly broad file system permissions. The stored data (`current_profile`, `current_soul_additions`) could also contain sensitive user information or internal LLM instructions, posing a data exfiltration risk if these files are compromised. Ensure that file I/O operations are strictly sandboxed to designated, non-sensitive directories. Implement robust path validation to prevent directory traversal attacks. If possible, use platform-specific secure storage mechanisms instead of direct file system access. Restrict the types of data stored in these files to only what is essential and non-sensitive. Implement strong access controls to protect these files from unauthorized read/write operations. | LLM | SKILL.md:204 | |
| MEDIUM | Missing required field: name The 'name' field is required for claude_code skills but is missing from frontmatter. Add a 'name' field to the SKILL.md frontmatter. | Static | skills/0xraini/skilltree/SKILL.md:1 |
Scan History
Embed Code
[](https://skillshield.io/report/36325660ad505560)
Powered by SkillShield