Trust Assessment
tdd-green received a trust score of 66/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include Missing required field: name, Shell Command Execution and Data Exfiltration, Prompt Injection via Unsanitized User Input ($ARGUMENTS).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 15, 2026 (commit 1823c3f6). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Shell Command Execution and Data Exfiltration The skill explicitly executes a shell command using the `!` prefix. This command attempts to read content from local files located in potentially sensitive directories such as `.specweave/skill-memories`, `.claude/skill-memories`, and `$HOME/.claude/skill-memories`. This poses a significant risk of data exfiltration, as the content of these files could be exposed. Additionally, direct shell command execution is a command injection vulnerability, even if the current command is for reading, as it demonstrates a capability that could be exploited if the skill definition were compromised. Avoid direct execution of shell commands within skill definitions. If file access is necessary, use a sandboxed and explicitly permissioned API. Ensure that any data read from local files is strictly controlled and not exposed to untrusted contexts. If the intent is to load configuration or memory, use a dedicated, secure mechanism for skill memory management. | LLM | SKILL.md:5 | |
| HIGH | Prompt Injection via Unsanitized User Input ($ARGUMENTS) The skill constructs a prompt for a 'general-purpose' subagent by directly interpolating the `$ARGUMENTS` variable without any sanitization or validation. This allows an attacker to inject malicious instructions or data into the subagent's prompt, potentially leading to prompt injection attacks. An attacker could manipulate the subagent's behavior, extract sensitive information, or generate undesirable outputs by crafting a malicious `$ARGUMENTS` value. Implement robust input validation and sanitization for all user-provided variables (like `$ARGUMENTS`) before they are incorporated into prompts for other LLMs or tools. Consider using techniques like input filtering, escaping, or structured data formats to prevent arbitrary instruction injection. If possible, pass user input as separate parameters rather than directly embedding it into the prompt string. | LLM | SKILL.md:13 | |
| MEDIUM | Missing required field: name The 'name' field is required for claude_code skills but is missing from frontmatter. Add a 'name' field to the SKILL.md frontmatter. | Static | plugins/specweave/skills/tdd-green/SKILL.md:1 |
Scan History
Embed Code
[](https://skillshield.io/report/fc8db356d1442c86)
Powered by SkillShield