Trust Assessment
humanize-ai received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings (1 critical, 1 high, 1 medium, 0 low severity). Key findings:

- Critical: the skill declares the 'Shell' permission, enabling arbitrary command execution.
- High: shell command examples in the documentation show potential for command injection.
- Medium: broad 'Read', 'Write', and 'Shell' permissions enable potential data exfiltration.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Skill declares 'Shell' permission, enabling arbitrary command execution.** The skill's manifest explicitly declares the 'Shell' tool, which grants the AI agent the ability to execute arbitrary shell commands. This highly privileged permission significantly increases the attack surface for command injection, data exfiltration, and system compromise. While the bundled Python scripts do not directly invoke shell commands, the skill's `SKILL.md` documentation demonstrates the intended use of shell commands (e.g., `python scripts/analyze.py`, `for` loops, `mv`), implying the agent is expected to execute them via the 'Shell' tool. *Recommendation:* re-evaluate whether the 'Shell' permission is necessary. If shell execution is required, prefer constrained invocation (e.g., `subprocess.run` with an explicit argument list and `check=True`) or apply strict input sanitization and validation wherever user input reaches a command. Where possible, replace shell commands with direct Python API calls. | LLM | SKILL.md |
| HIGH | **Shell command examples in documentation show potential for command injection.** `SKILL.md` provides shell examples such as `for f in *.md; do ... python scripts/analyze.py "$f"; done` and `mv "$f.tmp" "$f"`. If the agent were to construct these commands dynamically from untrusted filenames or paths supplied by a user and execute them via the declared 'Shell' tool, command injection becomes possible; for instance, a malicious filename like `"; rm -rf /; #.txt"` could be executed. Although the Python scripts themselves use `pathlib.Path` for file handling, the `SKILL.md` examples demonstrate direct shell usage that is susceptible if the agent does not handle arguments carefully. *Recommendation:* escape or quote all untrusted inputs (such as filenames), prefer `subprocess.run` with a list of arguments rather than a single command string, and avoid `shell=True`. If the 'Shell' tool is used, ensure the agent explicitly sanitizes or quotes arguments. | LLM | SKILL.md:70 |
| MEDIUM | **Broad 'Read', 'Write', and 'Shell' permissions enable potential data exfiltration.** The skill declares 'Read', 'Write', and 'Shell' permissions. While the Python scripts only read and write specific files, this combination allows an attacker to instruct the agent to read arbitrary files from the filesystem (e.g., sensitive configuration files or user data) and then exfiltrate that data, for example by writing it to a publicly accessible location or including it in the agent's response. The 'Shell' permission further exacerbates this by allowing arbitrary commands to locate and read files. *Recommendation:* restrict file access to the absolute minimum required. If 'Read' and 'Write' are necessary, confine them to a specific directory or set of file types; if 'Shell' is retained, implement strict sandboxing or output filtering to prevent sensitive data from being read or transmitted. | LLM | SKILL.md |
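The remediations suggested in the findings above can be sketched in Python. This is a minimal illustration under assumed conditions, not code from the skill itself; `run_analyzer`, `quote_for_shell`, and `confine` are hypothetical helper names introduced here for clarity.

```python
import shlex
import subprocess
from pathlib import Path


def run_analyzer(path: str) -> subprocess.CompletedProcess:
    """Invoke the analyzer with an argument list, never a shell string.

    With a list and shell=False (the default), a filename like
    '"; rm -rf /; #.txt' is passed to the program as literal data,
    not interpreted as shell syntax.
    """
    return subprocess.run(
        ["python", "scripts/analyze.py", path],
        capture_output=True, text=True, check=True,
    )


def quote_for_shell(path: str) -> str:
    """If a shell string is truly unavoidable, quote untrusted parts."""
    return shlex.quote(path)


def confine(base: Path, candidate: str) -> Path:
    """Resolve a path and reject anything escaping the allowed directory,
    mirroring the recommendation to confine 'Read'/'Write' access."""
    resolved = (base / candidate).resolve()
    if not resolved.is_relative_to(base.resolve()):
        raise PermissionError(f"path escapes working directory: {candidate}")
    return resolved
```

Using an argument list addresses the critical and high findings at their root (no shell parsing of filenames), while `confine` is one way to narrow the medium finding's broad file access to a single working directory.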
[View the full report](https://skillshield.io/report/a499d1fe6d2db9f3)
Powered by SkillShield