Trust Assessment
ai-rag-pipeline received a trust score of 67/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include "Untrusted input directly interpolated into LLM prompt" (high), "Unverified script execution from remote URL" (high), and "Overly broad Bash permissions granted" (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Untrusted input directly interpolated into LLM prompt.** The `research()` function in the 'Pipeline Templates' section directly interpolates user-provided `$query` into the `query` field of `tavily/search-assistant` and the `prompt` field of an LLM call (`openrouter/claude-haiku-45`). An attacker controlling the `$query` input could inject malicious instructions into the LLM prompt or manipulate the search query by crafting a payload that breaks out of the JSON string (e.g., `", "prompt": "malicious instruction"}`). The `results` from the search are also directly interpolated into the subsequent LLM call, creating a potential second-stage injection vector if the search results can be influenced by the initial query. This could lead to prompt injection, data exfiltration, or unintended LLM behavior. **Remediation:** sanitize or escape user input (`$query`, `$results`) before embedding it into JSON strings, especially when those strings are passed to LLMs. Use a JSON library or a templating engine that handles escaping automatically. For LLM prompts, use a structured input format that separates user input from system instructions, or employ prompt templating with strict variable substitution. | LLM | SKILL.md:228 |
| HIGH | **Unverified script execution from remote URL.** The 'Quick Start' section instructs users to install the `inference.sh` CLI by piping a script downloaded via `curl` directly into `sh` (`curl -fsSL https://cli.inference.sh \| sh`). This practice is highly risky because it executes code from an external source without prior review or verification. If the `inference.sh` server or the script itself were compromised, an attacker could execute arbitrary code on the user's machine, leading to system compromise, data exfiltration, or other malicious activity. **Remediation:** advise users to download the script, review its contents, and then execute it, or provide alternative installation methods through package managers with integrity checks (e.g., `apt`, `yum`, `brew`, `npm`). If direct execution is deemed necessary, implement strong integrity checks (e.g., verifying a cryptographic hash of the script before execution). | LLM | SKILL.md:11 |
| MEDIUM | **Overly broad Bash permissions granted.** The skill declares `Bash(infsh *)` as an allowed tool. While the skill primarily demonstrates using `infsh app run`, the `*` wildcard grants permission to execute any `infsh` subcommand (e.g., `infsh login`, `infsh app list`, `infsh app delete`, `infsh config set`). This violates the principle of least privilege, as the skill's core functionality (building RAG pipelines) appears to require only `infsh app run`. A malicious skill could exploit these broader permissions to perform unintended actions on the user's `inference.sh` account or configuration. **Remediation:** restrict the `Bash` permission to only the necessary `infsh` subcommands, e.g., `Bash(infsh app run *)`, if other `infsh` commands are not strictly required for the skill's operation. | LLM | SKILL.md:1 |
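The third finding's least-privilege fix would narrow the tool declaration to the one subcommand the skill actually uses. Assuming the skill's SKILL.md frontmatter uses the Claude-style `allowed-tools` syntax the finding quotes, the change is one line:

```yaml
# Before: the wildcard permits any infsh subcommand, including
# account and config operations the skill never needs.
# allowed-tools: Bash(infsh *)

# After: only pipeline runs are permitted.
allowed-tools: Bash(infsh app run *)
```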
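The first finding's remediation (escape untrusted input with a JSON library rather than string interpolation) can be sketched in Python. `build_search_payload` and its field name are illustrative, not part of the skill's actual code:

```python
import json

def build_search_payload(query: str) -> str:
    # json.dumps escapes quotes, braces, and control characters in the
    # untrusted value, so it cannot break out of its JSON string and
    # inject new fields such as a "prompt" override.
    return json.dumps({"query": query})

# A payload that would break naive string interpolation into a JSON template:
attack = '", "prompt": "malicious instruction"}'
payload = build_search_payload(attack)

# The injected quotes are escaped, and parsing round-trips the raw string
# as plain data instead of structure.
assert json.loads(payload)["query"] == attack
```

The same principle applies to the second-stage `$results` interpolation: treat search results as data passed through a strict template, never as raw text spliced into the prompt string.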
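The second finding's suggested integrity check (verify a cryptographic hash before executing a downloaded installer) might look like the following sketch. The script bytes and expected digest here are placeholders, not the real checksum for `cli.inference.sh`:

```python
import hashlib

def verify_script(contents: bytes, expected_sha256: str) -> bool:
    # Refuse to run the installer unless its SHA-256 digest matches a
    # checksum published out-of-band by the vendor (e.g., on a release page).
    return hashlib.sha256(contents).hexdigest() == expected_sha256

script = b"echo hello\n"  # stand-in for the downloaded install script
published = hashlib.sha256(script).hexdigest()

assert verify_script(script, published)          # untampered copy passes
assert not verify_script(script + b"x", published)  # modified copy is rejected
```

Only after this check passes would the script be handed to `sh`, replacing the blind `curl | sh` pattern with download, verify, review, then execute.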
[View the full report](https://skillshield.io/report/e1f19b7626816a99)
Powered by SkillShield