Trust Assessment
blog-writer received a trust score of 84/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include "Potential Path Traversal in Filename for Saved Posts" and "Implicit Command Execution for Pruning Old Examples".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Implicit Command Execution for Pruning Old Examples<br>The `SKILL.md` instructs the LLM to 'remove the 5 oldest' examples if the library exceeds 20. A supporting Python script, `manage_examples.py`, is provided which contains logic for identifying and deleting these files (`filepath.unlink()`) and is designed for command-line execution (`if __name__ == "__main__":`). If the LLM environment allows arbitrary command execution (e.g., via `subprocess.run`), the instruction to 'remove' files could lead the LLM to construct and execute a command like `python manage_examples.py prune --execute`. This constitutes a command injection vulnerability, as an attacker could potentially manipulate the LLM's interpretation of 'remove' or the arguments passed to the script, leading to unintended file deletions or other command execution.<br>1. Avoid instructing the LLM to perform actions that require direct shell command execution. 2. If file deletion is necessary, implement it via a secure, sandboxed API or tool call that does not expose shell access. 3. If `manage_examples.py` must be used, ensure the LLM is strictly constrained to only call it with predefined, safe arguments, and that the script itself has robust input validation and is executed in a highly restricted environment. | LLM | SKILL.md:106 |
| MEDIUM | Potential Path Traversal in Filename for Saved Posts<br>The skill instructs the LLM to save finalized blog posts to `references/blog-examples/` using a filename format `YYYY-MM-DD-slug-title.md`. If the `slug-title` portion of the filename is derived from untrusted user input without proper sanitization, an attacker could inject path traversal sequences (e.g., `../`) to write files to arbitrary locations outside the intended `references/blog-examples/` directory. This could lead to overwriting critical system files or exfiltrating data by writing it to publicly accessible directories.<br>Implement strict validation and sanitization of the `slug-title` component to prevent path traversal characters (e.g., `../`, `/`, `\`) before constructing the filename. Ensure the LLM is explicitly instructed on how to sanitize this input. | LLM | SKILL.md:100 |