Trust Assessment
writing-plans received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is Potential Command Injection via Generated Plan Content.
The analysis covered 4 layers: dependency_graph, llm_behavioral_safety, manifest_analysis, static_code_analysis. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 11, 2026 (commit 6d52fe32). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via Generated Plan Content | Unknown | SKILL.md:80 |

The skill instructs the agent to generate implementation plans that include shell commands and file paths. If user-provided input (e.g., feature names, component names, test names, file paths) is interpolated directly into those commands or paths without sanitization or escaping, the generated plan can carry a command injection when it is later executed by another agent or a human. For example, a malicious feature name could inject arbitrary commands into a `git commit` message or a `pytest` command.

Remediation: the agent implementing this skill must sanitize and escape every user-provided string used to populate command templates (e.g., feature names, commit messages, test names) or file paths, to prevent shell metacharacter injection. This includes validating file paths and escaping quotes and other special characters in command arguments. When possible, prefer safer execution methods that pass arguments as lists rather than single command strings.
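The remediation above can be sketched in Python. This is a minimal illustration, not code from the skill itself; the feature name and commands are hypothetical examples of how a generated plan might embed user input.

```python
import shlex

# Hypothetical user-supplied feature name containing shell metacharacters.
feature_name = 'login"; rm -rf ~; echo "'

# Unsafe: naive interpolation lets the metacharacters break out of the
# quoted commit message when the plan is run through a shell.
unsafe_cmd = f'git commit -m "feat: add {feature_name}"'

# Safer for plan text that will be run through a shell: shlex.quote wraps
# the whole argument so the shell treats it as one literal string.
safe_cmd = f"git commit -m {shlex.quote('feat: add ' + feature_name)}"

# Safest when the plan is executed programmatically: pass arguments as a
# list, which bypasses the shell entirely (no metacharacter interpretation).
argv = ["git", "commit", "-m", f"feat: add {feature_name}"]
# subprocess.run(argv, check=True)  # left commented: would invoke git
```

The list form (`argv`) is the strongest guard because no shell ever parses the string; `shlex.quote` is the fallback when the plan must remain a single shell command line.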