Trust Assessment
blader/humanizer:root received a trust score of 42/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. Key findings include Second-Order Prompt Injection Vulnerability, Excessive Filesystem Permissions, Potential Data Exfiltration via Prompt Injection and Excessive Permissions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 14, 2026 (commit d8085c7d). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Second-Order Prompt Injection Vulnerability The skill's design, as described in the SKILL.md, involves taking untrusted user input, processing it, and then using the (potentially 'humanized') output in subsequent LLM prompts (e.g., for the 'final anti-AI pass'). If the initial processing does not fully neutralize malicious instructions embedded in the user's input, these instructions could be passed to the underlying LLM in the subsequent prompts. This allows a malicious user to potentially control the LLM's behavior during internal skill operations, leading to unintended actions, information disclosure, or manipulation of the skill's output beyond its intended scope. Implement robust input sanitization and validation before using any user-provided text in subsequent LLM prompts. Consider using a dedicated, sandboxed LLM call for safety checks or prompt neutralization. Explicitly define the boundaries of what the LLM should process from user input in sub-prompts, potentially by wrapping user content in XML tags or similar delimiters that the LLM is instructed to treat as data. | LLM | SKILL.md:300 | |
| HIGH | Excessive Filesystem Permissions The skill's manifest declares 'Read', 'Write', 'Edit', 'Grep', and 'Glob' permissions. A skill designed solely to 'Remove signs of AI-generated writing from text' should primarily operate on the text provided as input and does not inherently require broad access to the filesystem. These excessive permissions significantly increase the attack surface, allowing for potential data exfiltration, data tampering, or denial of service if exploited via prompt injection. Reduce the declared permissions to the absolute minimum required for the skill's intended functionality. For a text humanization skill, this likely means no filesystem access (Read, Write, Edit, Grep, Glob) is needed. If any specific file operations are truly necessary, narrow the scope (e.g., to a specific directory or file type) and ensure explicit user consent or strict validation for file paths. | Static | SKILL.md | |
| HIGH | Potential Data Exfiltration via Prompt Injection and Excessive Permissions The combination of a second-order prompt injection vulnerability (SS-LLM-001) and excessive filesystem permissions ('Read', 'Grep') creates a high risk of data exfiltration. A malicious user could craft an input that, when processed and used in subsequent LLM prompts, instructs the skill to read sensitive files (e.g., `/etc/passwd`, `.env` files, SSH keys, or other configuration files) from the filesystem and then include their contents, disguised or otherwise, within the 'humanized' output. This allows for unauthorized disclosure of sensitive information. Address the root causes: mitigate the prompt injection vulnerability (SS-LLM-001) and reduce excessive filesystem permissions (SS-PERM-001). Additionally, implement strict output filtering to prevent sensitive patterns (e.g., file paths, common credential formats) from appearing in the skill's final output, especially if they were not part of the original input. | Static | SKILL.md |
Scan History
Embed Code
[](https://skillshield.io/report/2db4f7d2d7de0ea2)
Powered by SkillShield