Trust Assessment
chain-of-density received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include Potential Command Injection in `uv run` example, Potential Prompt Injection in subagent calls.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential Command Injection in `uv run` example The `SKILL.md` provides an example of executing `scripts/text_metrics.py` using `uv run scripts/text_metrics.py metrics "your summary text"`. If the `"your summary text"` placeholder is replaced by unsanitized user-controlled input, and this command is executed directly in a shell, it can lead to command injection. An attacker could inject shell metacharacters (e.g., `"; rm -rf /;"`) to execute arbitrary commands on the host system. When constructing shell commands that include user-controlled input, ensure the input is properly escaped or quoted to prevent shell metacharacter interpretation. Alternatively, pass user input via stdin or environment variables where possible, or use a library function that safely executes commands (e.g., `subprocess.run` with `shell=False` and passing arguments as a list). | LLM | SKILL.md:106 | |
| HIGH | Potential Prompt Injection in subagent calls The `SKILL.md` demonstrates how to orchestrate subagents using `Task(subagent_type="cod-iteration", prompt="""...""")`. The `prompt` content includes placeholders like `[SOURCE TEXT HERE]`, `[PREVIOUS SUMMARY HERE]`, and `[ORIGINAL SOURCE TEXT HERE]`. If these placeholders are directly substituted with untrusted user-controlled input without sanitization or validation, an attacker could inject malicious instructions into the `cod-iteration` subagent's prompt. This could manipulate the subagent's behavior, leading to unintended actions, data leakage, or denial of service. Implement robust input sanitization and validation for all user-controlled text that is incorporated into LLM prompts. Consider using prompt templating libraries that enforce strict separation between instructions and user data. If possible, structure prompts to minimize the LLM's ability to interpret user input as instructions. | LLM | SKILL.md:75 |
Scan History
Embed Code
[](https://skillshield.io/report/83a9f3736f05ce0f)
Powered by SkillShield