Trust Assessment
fabric-pattern received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 2 critical, 1 high, 2 medium, and 0 low severity. Key findings include: a missing required `name` field; the LLM being instructed to execute arbitrary instructions from user-influenced `system.md` files; and user-controlled `yt-dlp-args` allowing arbitrary command execution.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 18/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **LLM instructed to execute arbitrary instructions from user-influenced `system.md` files.** The skill explicitly instructs the LLM to read the content of `~/.config/fabric/patterns/"pattern_name"/system.md` and use it as a "strict instruction/persona". The `pattern_name` can be influenced by user input. If a malicious user can control the content of these `system.md` files (e.g., by creating a custom pattern or exploiting a path traversal vulnerability), they can inject arbitrary instructions into the host LLM, leading to prompt injection. *Remediation:* Do not allow the LLM to adopt instructions directly from user-influenced or external files. Instead, parse the `system.md` content, extract specific safe parameters or data, and integrate them into a predefined prompt template. Strictly sanitize and validate `pattern_name` to prevent path traversal. | LLM | SKILL.md:11 |
| CRITICAL | **User-controlled `yt-dlp-args` allows arbitrary command execution.** The skill instructs the LLM to use `fabric -y "URL"` with support for `--yt-dlp-args="..."`. The `yt-dlp-args` are passed directly from user input, and `yt-dlp` supports an `--exec` flag that allows arbitrary command execution. An attacker can inject malicious shell commands via this argument, leading to full command injection on the host system. *Remediation:* Never pass user-controlled input directly as arguments to shell commands or powerful tools like `yt-dlp` without strict sanitization and validation. In particular, disallow or filter out dangerous flags such as `--exec`, and prefer an allowlist of safe `yt-dlp` arguments over a blocklist. | LLM | SKILL.md:26 |
| HIGH | **User-controlled arguments passed directly to the `fabric` CLI without explicit sanitization.** The skill instructs the LLM to execute `fabric` CLI commands with user-controlled arguments such as `URL` for `fabric -u` and `fabric -y`, and `question` for `fabric -q`. If the `fabric` CLI itself does not sufficiently sanitize these arguments, or if the LLM constructs the command string without proper escaping, an attacker could inject shell commands. For example, a malicious URL like `"; rm -rf /; #"` could lead to command injection. *Remediation:* Strictly sanitize and shell-escape all user-controlled inputs passed to shell commands. Prefer a library or API that handles argument passing securely over direct string concatenation. | LLM | SKILL.md:28 |
| MEDIUM | **Missing required field: `name`.** The `name` field is required for claude_code skills but is missing from the frontmatter. *Remediation:* Add a `name` field to the SKILL.md frontmatter. | Static | skills/apuryear/fabric-pattern/SKILL.md:1 |
| MEDIUM | **User's "question" sent to the external Jina AI service.** The skill instructs the LLM to use `fabric -q "question"` for context search, explicitly stating that it uses "Jina AI". This means the user's "question" (which could contain sensitive information) is sent to an external third-party service. This constitutes data exfiltration if the user is unaware of or has not consented to their data being shared with Jina AI. *Remediation:* Inform users that their search queries will be sent to Jina AI, and provide an option to disable this feature or use a local search mechanism if privacy is a concern. Ensure no personally identifiable information (PII) or highly sensitive data is inadvertently included in the "question". | LLM | SKILL.md:29 |
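The validation steps recommended in the findings above can be sketched as follows. This is a minimal illustration, not code from the skill or from `fabric`: the helper names (`safe_pattern_path`, `filter_ytdlp_args`, `build_fabric_command`) and the blocklist contents are hypothetical, though `--exec` is a real `yt-dlp` flag. As the report notes, an allowlist of permitted flags would be stronger than the blocklist shown here.

```python
import re
import shlex
from pathlib import Path

PATTERNS_DIR = Path.home() / ".config" / "fabric" / "patterns"

def safe_pattern_path(pattern_name: str) -> Path:
    """Validate pattern_name and return the path to its system.md."""
    # Allowlist of characters: letters, digits, underscore, hyphen.
    # This alone blocks "../" path traversal.
    if not re.fullmatch(r"[A-Za-z0-9_-]+", pattern_name):
        raise ValueError(f"invalid pattern name: {pattern_name!r}")
    path = (PATTERNS_DIR / pattern_name / "system.md").resolve()
    # Defense in depth: confirm the resolved path stays inside PATTERNS_DIR.
    if not path.is_relative_to(PATTERNS_DIR.resolve()):
        raise ValueError("pattern path escapes the patterns directory")
    return path

# Illustrative blocklist; an allowlist of known-safe flags is preferable.
DANGEROUS_YTDLP_FLAGS = {"--exec", "--exec-before-download"}

def filter_ytdlp_args(raw: str) -> list[str]:
    """Split user-supplied yt-dlp args and reject dangerous flags."""
    args = shlex.split(raw)
    for arg in args:
        flag = arg.split("=", 1)[0]  # handle both --exec cmd and --exec=cmd
        if flag in DANGEROUS_YTDLP_FLAGS:
            raise ValueError(f"disallowed yt-dlp flag: {flag}")
    return args

def build_fabric_command(url: str, ytdlp_args: str = "") -> list[str]:
    """Build the command as an argument list (argv), never a shell string.

    Because nothing is interpolated into a shell, a malicious URL such as
    '"; rm -rf /; #' is passed to fabric as an inert literal argument.
    """
    cmd = ["fabric", "-y", url]
    if ytdlp_args:
        cmd.append("--yt-dlp-args=" + shlex.join(filter_ytdlp_args(ytdlp_args)))
    return cmd
```

Passing the result to `subprocess.run(cmd)` (with `shell=False`, the default) avoids the string-concatenation injection described in the HIGH finding, while the two validators address the path-traversal and `--exec` vectors.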
[View the full report on SkillShield](https://skillshield.io/report/4c47a80d50f8400b)