Trust Assessment
seo-optimizer received a trust score of 74/100, placing it in the Caution category. This skill carries security risks that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include Unpinned External Dependency and Potential Command Injection via Unsanitized Arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned External Dependency.** The skill explicitly instructs installation of the external dependency 'OpenClawCLI' without specifying a version. This is a supply-chain risk: a future malicious or incompatible version could be downloaded and executed, potentially compromising the system or breaking the skill. Without version pinning there is no guarantee of consistency or security over time. Recommendation: pin a precise version (e.g., 'OpenClawCLI v1.2.3'), provide a checksum or hash for verification, and if possible host the dependency or a verified mirror in a controlled environment; see the verification sketch below the table. | LLM | SKILL.md:5 |
| HIGH | **Potential Command Injection via Unsanitized Arguments.** The skill instructs the use of shell commands such as `python scripts/seo_analyzer.py <directory_or_file>` and `python scripts/generate_sitemap.py <directory> <base_url>`, whose arguments are likely derived from untrusted user input (file paths, URLs). The skill provides no guidance or mechanism for sanitizing those arguments before execution, so an LLM following its instructions could be vulnerable to command injection if a user supplies crafted input such as `'; rm -rf /'`. Recommendation: sanitize all user-provided arguments before passing them to shell commands, whether by safe argument parsing, quoting, or strict validation against expected patterns (valid file paths, URLs), and state explicitly that user input must be sanitized; see the sanitization sketch below the table. | LLM | SKILL.md:28 |
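To illustrate the remediation for the first finding, here is a minimal sketch of a checksum-verified download in Python. The release URL, version, and SHA-256 digest are placeholders, since neither a distribution endpoint nor a published hash for 'OpenClawCLI' appears in the skill; the point is only that the artifact is rejected unless its digest matches a value pinned alongside the version.

```python
import hashlib
import sys
import urllib.request

# Hypothetical values: pin an exact release and its published SHA-256 digest.
OPENCLAW_URL = "https://example.com/openclaw/releases/v1.2.3/openclaw-cli.tar.gz"
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"


def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
    """Download an artifact and refuse to use it unless its digest matches."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(
            f"Checksum mismatch for {url}: got {digest}, expected {expected_sha256}"
        )
    return data


if __name__ == "__main__":
    try:
        archive = fetch_and_verify(OPENCLAW_URL, EXPECTED_SHA256)
    except RuntimeError as err:
        print(err, file=sys.stderr)
        sys.exit(1)
    # Only after verification would the archive be unpacked and installed.
```

With this pattern, a tampered or silently re-released artifact fails loudly at install time instead of executing; bumping the dependency becomes a deliberate edit to both the version and the digest.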
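For the second finding, the core defense is to avoid the shell entirely and validate arguments before execution. Below is a minimal sketch assuming the two script paths quoted in the finding; the wrapper names and validation rules are illustrative and should be tightened to match the skill's real input contract.

```python
import re
import subprocess
from pathlib import Path

# Hypothetical validation rule; adapt it to the base URLs the skill actually accepts.
URL_PATTERN = re.compile(r"^https?://[\w.-]+(?::\d+)?(?:/[\w./~%-]*)?$")


def run_seo_analyzer(target: str) -> subprocess.CompletedProcess:
    """Validate the path, then invoke the script without a shell."""
    path = Path(target).resolve()
    if not path.exists():
        raise ValueError(f"Target does not exist: {target}")
    # Passing a list (and never shell=True) hands the argument to the script
    # verbatim; shell metacharacters like ';' or '&&' are not interpreted.
    return subprocess.run(
        ["python", "scripts/seo_analyzer.py", str(path)],
        check=True, capture_output=True, text=True,
    )


def run_sitemap_generator(directory: str, base_url: str) -> subprocess.CompletedProcess:
    """Validate both arguments against expected patterns before execution."""
    path = Path(directory).resolve()
    if not path.is_dir():
        raise ValueError(f"Not a directory: {directory}")
    if not URL_PATTERN.match(base_url):
        raise ValueError(f"Rejecting suspicious base URL: {base_url}")
    return subprocess.run(
        ["python", "scripts/generate_sitemap.py", str(path), base_url],
        check=True, capture_output=True, text=True,
    )
```

Under this scheme, the example payload `'; rm -rf /'` either fails the existence check or arrives at the script as a literal (and nonexistent) filename argument rather than being interpreted by a shell.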
Embed Code
```markdown
[SkillShield Report](https://skillshield.io/report/55572fab3e3f8f7f)
```