Trust Assessment
prompt-optimizer received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is Potential Command Injection via Script Execution.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via Script Execution.** The skill instructs the execution of external Python scripts (`scripts/evaluate.py`, `scripts/optimize.py`) and interpolates user-provided prompt text directly into the command-line arguments. This permits command injection whenever the prompt contains shell metacharacters (e.g., `"; rm -rf /"`). The 'Script Usage' section confirms these are intended execution instructions. Remediation: sanitize and validate any user-provided text before passing it to a shell command; execute external processes with `subprocess.run` using `shell=False` and an argument list rather than a single shell string; and ensure the called scripts (`evaluate.py`, `optimize.py`) also handle their arguments securely. | LLM | SKILL.md:90 |
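The remediation in the finding above can be sketched in Python. This is a minimal illustration, not the skill's actual code: the one-liner that echoes its first argument stands in for the real `evaluate.py`/`optimize.py`, which are not reproduced here. The point is that with `shell=False` (the default) and an argument list, the prompt reaches the child process as a single `argv` entry and is never parsed by a shell.

```python
import subprocess
import sys

def run_helper_safely(args: list[str], prompt: str) -> str:
    """Invoke a helper process with the user prompt as one argv entry.

    Because shell=False is the default for subprocess.run with a list,
    no shell ever parses the prompt, so metacharacters such as ';',
    '"', or '$' cannot inject commands.
    """
    result = subprocess.run(
        args + [prompt],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

# Stand-in for scripts/optimize.py: a one-liner that echoes argv[1].
echo_argv = [sys.executable, "-c", "import sys; print(sys.argv[1])"]

hostile = 'rewrite this prompt"; rm -rf / #'
print(run_helper_safely(echo_argv, hostile))  # the string arrives verbatim
```

The unsafe pattern the finding describes would instead build a single string like `f'python scripts/optimize.py "{prompt}"'` and run it with `shell=True`, at which point the embedded `";` terminates the quoted argument and the rest executes as shell commands.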