Trust Assessment
rag-search received a trust score of 74/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 2 medium, and 0 low severity. Key findings: a missing required `name` field, hardcoded absolute paths to unmanaged external dependencies and data, and an untrusted user query passed directly to external LLM clients.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Hardcoded absolute paths to unmanaged external dependencies and data.** The skill uses hardcoded absolute paths (`/root/.openclaw/workspace/rag_system/scripts` and `/root/.openclaw/workspace/rag_system/data/vectors.db`) to import external Python modules and access a database file. These external components are not part of the skill's package, are unversioned, and live in the privileged `/root/` directory. This creates a significant supply-chain risk: the skill's security and integrity depend entirely on external, unmanaged files, and an attacker who can modify those files could inject code that the skill would then execute. Accessing `/root/` also implies broader filesystem permissions than a skill should need. Recommendations: (1) bundle `search_pipeline.py`, `embedding_client.py`, and `LiteVectorStore` (if it is a custom class) directly in the skill package, or manage them as version-pinned dependencies; (2) avoid hardcoding absolute paths, especially under `/root/`; use relative paths or environment variables for configurable locations; (3) if external components are unavoidable, verify their integrity (e.g., via checksums) before use; (4) re-evaluate storing data under `/root/` and prefer a sandboxed location. | LLM | handler.py:11 |
| MEDIUM | **Missing required field: `name`.** The `name` field is required for claude_code skills but is missing from the SKILL.md frontmatter. Recommendation: add a `name` field to the frontmatter. | Static | skills/loda666/rag-search/SKILL.md:1 |
| MEDIUM | **Untrusted user query passed directly to external LLM clients.** The `run` function accepts a `query` string from untrusted user input and passes it, without sanitization or validation, to `QwenEmbeddingClient().embed_text(query)` and `QwenRerankClient().rerank(query, ...)`. Judging by their names, `QwenEmbeddingClient` and `QwenRerankClient` are very likely wrappers around LLMs or similar AI services. If those services are susceptible to prompt injection, an attacker could craft a malicious `query` to manipulate the underlying model, potentially causing unintended actions, information disclosure, or denial of service. Recommendations: (1) validate and sanitize the `query` parameter before it reaches any external LLM service; (2) ensure the clients implement prompt-injection defenses such as input filtering, output moderation, or sandboxing; (3) where possible, use an embedding/reranking API that explicitly separates user input from system instructions; (4) consider an abstraction or proxy layer that enforces security policies on inputs to these services. | LLM | handler.py:33 |
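Recommendations (2) and (3) from the HIGH finding can be sketched as follows. This is a minimal illustration, not the skill's actual code: the `RAG_SCRIPTS_DIR`/`RAG_VECTOR_DB` environment variables, the fallback paths, and the `verify_checksum` helper are all assumptions introduced here.

```python
import hashlib
import os
from pathlib import Path

# Resolve locations from environment variables instead of hardcoded /root/...
# paths, falling back to package-relative defaults (names are illustrative).
SCRIPTS_DIR = Path(os.environ.get("RAG_SCRIPTS_DIR", "./rag_system/scripts"))
DB_PATH = Path(os.environ.get("RAG_VECTOR_DB", "./rag_system/data/vectors.db"))


def verify_checksum(path: Path, expected_sha256: str) -> None:
    """Refuse to use an external file whose contents have changed.

    The expected digest would be pinned alongside the skill, so that a
    modified external module or database fails loudly instead of being
    silently imported or opened.
    """
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"integrity check failed for {path}: got {digest}")
```

A caller would invoke `verify_checksum(DB_PATH, pinned_digest)` before opening the database; the same pattern applies to any externally loaded module file.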
[Full report](https://skillshield.io/report/dff0a53d970f5de7)
Powered by SkillShield