Trust Assessment
content-creator received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include potential command injection via untrusted arguments to Python scripts and arbitrary file reading leading to data exfiltration.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via untrusted arguments to Python scripts.** The skill instructs the AI agent to execute Python scripts (`brand_voice_analyzer.py`, `seo_optimizer.py`) via shell commands, passing user-provided file paths and keywords as arguments. If these arguments are not properly sanitized or shell-escaped before the shell command is constructed and executed, an attacker could inject arbitrary shell commands. For example, providing a file path like `'; rm -rf /'` or a keyword like `$(cat /etc/passwd)` could lead to arbitrary code execution or data exfiltration on the host system. When constructing shell commands to execute Python scripts, ensure all user-provided arguments (file paths, keywords) are strictly validated and properly shell-escaped. A safer approach is to use `subprocess.run` with `shell=False` and pass arguments as a list, or to call the Python functions directly within the agent's environment, passing file *content* rather than file *paths*. | LLM | SKILL.md:60 |
| HIGH | **Arbitrary file reading leading to data exfiltration.** Both `brand_voice_analyzer.py` and `seo_optimizer.py` are designed to read the content of a file specified by a command-line argument. If the AI agent allows users to provide arbitrary file paths without strict validation or sandboxing, an attacker could instruct the skill to read sensitive system files (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, `/app/secrets.txt`). The content of these files would then be processed by the Python scripts and potentially included in their output (e.g., analysis results, readability scores, keyword densities), leading to data exfiltration. The skill description does not indicate any restrictions on file access scope. Implement strict validation and sandboxing for all file paths provided by users; restrict file access to a designated, non-sensitive directory. Avoid allowing arbitrary file paths. Consider passing the *content* of the file directly to the Python functions from the agent's environment, rather than allowing the Python script to open arbitrary paths. If file paths must be used, ensure they are canonicalized and checked against an allow-list or a secure base directory. | LLM | SKILL.md:60 |
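The `shell=False` mitigation recommended for the critical finding can be sketched as follows. This is a minimal illustration, not the skill's actual invocation code; the `build_command` helper and the argument order assumed for `brand_voice_analyzer.py` are hypothetical.

```python
import subprocess
import sys

def build_command(file_path, keywords):
    # Each value becomes its own argv element. Because no shell ever
    # re-parses the string, metacharacters such as ';' or '$(...)' in
    # user input remain inert literal arguments.
    return [sys.executable, "brand_voice_analyzer.py", file_path, *keywords]

def run_analyzer(file_path, keywords):
    # shell=False is the default for list arguments; stated explicitly
    # here to contrast with the injectable shell-string form.
    result = subprocess.run(
        build_command(file_path, keywords),
        shell=False,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```

With this shape, a malicious path such as `'; rm -rf /'` is delivered to the script verbatim as a single argument rather than being interpreted as a command separator.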
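The canonicalize-then-check pattern recommended for the high-severity finding can be sketched like this. The `workspace` base directory and the `resolve_safe_path` helper are illustrative assumptions, not part of the skill.

```python
import os

# Hypothetical sandbox directory; any file access is confined here.
ALLOWED_BASE = os.path.realpath("workspace")

def resolve_safe_path(user_path):
    # realpath() resolves "..", "." and symlinks, so traversal tricks
    # are flattened before the containment check runs.
    candidate = os.path.realpath(os.path.join(ALLOWED_BASE, user_path))
    if os.path.commonpath([candidate, ALLOWED_BASE]) != ALLOWED_BASE:
        raise ValueError(f"path escapes sandbox: {user_path!r}")
    return candidate
```

A request for `notes.txt` resolves inside the sandbox, while `../../etc/passwd` (or an absolute path) fails the containment check and is rejected before any file is opened.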