Trust Assessment
The `defuddle` skill received a trust score of 86/100, placing it in the Mostly Trusted category: it passed most security checks, with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The sole finding is Potential Command Injection via User-Controlled URL.
The analysis covered 4 layers: `manifest_analysis`, `llm_behavioral_safety`, `static_code_analysis`, and `dependency_graph`. All four layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 15, 2026 (commit 3e75fabd). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via User-Controlled URL.** The skill documentation shows the `defuddle` CLI being invoked with a user-provided URL (e.g., `defuddle parse <url> --md`). If the LLM constructs and executes shell commands from these examples, and the `<url>` parameter is not sanitized or validated first, an attacker can inject arbitrary shell commands: a URL like `http://example.com; rm -rf /` would run a destructive command on the host. Mitigation: rigorously validate any user-provided input, especially URLs, before it reaches a shell command; prefer an execution mechanism that does not interpret shell metacharacters, or whitelist allowed characters and escape all others; and where possible pass the URL as a distinct argument to `defuddle` rather than embedding it in a raw shell string. | Unknown | SKILL.md:13 |
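The mitigation in the finding above can be sketched as follows. This is a minimal illustration, not code from the `defuddle` project: the function name and the validation policy are assumptions, and the key point is that the URL is passed as one discrete `argv` element with no shell involved.

```python
import shutil
import subprocess
from urllib.parse import urlparse

# Characters meaningful to a shell that we never accept in a URL
# (illustrative deny-list; a real policy might be stricter).
_FORBIDDEN = set(" ;|&$`\n\r\"'")

def parse_with_defuddle(url: str) -> str:
    """Run `defuddle parse <url> --md` without ever invoking a shell."""
    if any(ch in _FORBIDDEN for ch in url):
        raise ValueError(f"URL contains shell metacharacters: {url!r}")
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"refusing non-http(s) URL: {url!r}")
    exe = shutil.which("defuddle")
    if exe is None:
        raise RuntimeError("defuddle CLI not found on PATH")
    # An argv list with shell=False (the default) means the URL is a single
    # argument: `;`, `&&`, backticks, etc. are never interpreted by a shell.
    result = subprocess.run(
        [exe, "parse", url, "--md"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

With this structure, the injection payload from the finding (`http://example.com; rm -rf /`) is rejected by validation, and even a payload that slipped past validation would be delivered to `defuddle` as an inert string rather than executed by a shell.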
Full report: [https://skillshield.io/report/2d8b46ea084e38fe](https://skillshield.io/report/2d8b46ea084e38fe)