Trust Assessment
humanize-ai-text received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include command injection via unescaped user input in shell commands, and broad file system access via Read, Write, and Glob permissions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command injection via unescaped user input in shell commands.** The skill declares the `Shell` permission, and the `SKILL.md` examples execute Python scripts via shell commands that interpolate user-controlled filenames (e.g., `$f` from `*.txt` or `*.md` glob patterns) directly into the command string. A malicious filename such as `"; rm -rf /; #.txt` could therefore trigger arbitrary command execution on the host. Remediation: avoid the `Shell` permission if possible; if shell execution is strictly necessary, use `subprocess.run` with `shell=False` and pass arguments as a list, or rigorously escape every user-controlled variable with `shlex.quote()` before interpolation. | LLM | SKILL.md:160 |
| HIGH | **Broad file system access via Read, Write, and Glob permissions.** The skill's Python scripts (`detect.py`, `transform.py`, `compare.py`) call `Path(args.input).read_text()` and `Path(args.output).write_text()`, reading from and writing to arbitrary user-specified paths, while the `Glob` permission allows listing files in arbitrary directories. In combination, this grants broad file system access: an attacker could read sensitive files, overwrite critical system files, or exfiltrate data by controlling the output path. Although these permissions serve the skill's stated function, the lack of path validation or sandboxing makes them excessive for a general-purpose text-processing skill. Remediation: confine file operations to expected directories (e.g., a temporary sandbox or user-specific directories), disallow arbitrary input/output paths, restrict `Glob` to specific non-sensitive directories, and consider running the skill in a sandboxed environment. | LLM | Manifest:1 |
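The command-injection remediation can be sketched as follows. This is a minimal illustration, not the skill's actual code: the `transform.py` script name comes from the report, but its `--input` flag and the surrounding loop are assumed for the example.

```python
import shlex
import subprocess
import sys

def run_transform(filename: str) -> None:
    """Run a hypothetical transform script on one file without shell parsing.

    With shell=False and an argument list, the filename reaches the script
    as a single argv element, so shell metacharacters in it are inert.
    """
    subprocess.run(
        [sys.executable, "transform.py", "--input", filename],
        shell=False,
        check=True,
    )

# If building a shell string is truly unavoidable, escape each variable first.
# shlex.quote wraps the hostile name in single quotes so the shell treats it
# as one literal token instead of a command terminator plus a new command.
hostile = '"; rm -rf /; #.txt'
command = f"python transform.py --input {shlex.quote(hostile)}"
```

The list form is preferable to quoting: it removes the shell from the picture entirely, whereas `shlex.quote` only defends the specific string it is applied to.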
[Full report on SkillShield](https://skillshield.io/report/cd30099dffe0056b)