Trust Assessment
doc-search received a trust score of 65/100, placing it in the Caution category. The skill has security issues that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, and 1 medium severity (no low-severity findings). The key findings are command injection via `eval` in `quick_search.sh`, excessive permissions and data exfiltration in `quick_search.sh`, and excessive permissions and data exfiltration in the Python scripts.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100; all three findings below were raised by that layer.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via `eval` in quick_search.sh.** The script uses `eval` to execute `ripgrep` with the user-controlled `QUERY` and `SEARCH_PATH` variables, so an attacker can inject arbitrary shell commands through a crafted query or path; for example, the query `'; rm -rf /; #` would be executed by the shell. Remediation: never pass user-controlled input through `eval`; build the command as an argument list and invoke it directly (e.g. `subprocess.run` in Python), with all user-provided arguments properly quoted and escaped. See the first sketch after this table. | LLM | scripts/quick_search.sh:22 |
| HIGH | **Excessive Permissions and Data Exfiltration in quick_search.sh.** The script searches an arbitrary user-supplied `SEARCH_PATH`. Combined with the command injection above, an attacker can point the script at sensitive directories (e.g. `/etc`, `/root`) and exfiltrate their contents through the script's output; even without injection, a malicious user could search the entire filesystem for patterns such as `password` or `API_KEY` if the agent accepts arbitrary paths. Remediation: restrict `SEARCH_PATH` to a predefined set of safe directories, validate the input to block directory traversal and sensitive system paths, and run the script with the least necessary privileges. See the second sketch after this table. | LLM | scripts/quick_search.sh:7 |
| MEDIUM | **Excessive Permissions and Data Exfiltration in Python scripts.** Both `scripts/indexer.py` and `scripts/search.py` resolve a user-supplied `path` argument (`Path(path).resolve()`) and walk it recursively with `rglob('*')`. If the agent accepts arbitrary input for `path`, a malicious user can direct the scripts at sensitive directories (e.g. `/etc`, `/root`); `indexer.py` stores truncated file content in its index and `search.py` returns content with context, so exposed output can leak data. Remediation: validate `path` against an allowlist of non-sensitive directories, consider sandboxing or minimal filesystem permissions, and limit `rglob` to specific file types or directories where possible. See the third sketch after this table. | LLM | scripts/indexer.py:133 |
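To make the critical remediation concrete, here is a minimal sketch, assuming the `eval`-based shell wrapper is replaced (or mediated) by Python. The `quick_search` function name and the flag choices are illustrative, not the skill's actual code; the point is the pattern the finding calls for: passing an argument list to `subprocess.run` so no shell ever interprets the query.

```python
import subprocess

def quick_search(query: str, search_path: str) -> str:
    """Invoke ripgrep without a shell (hypothetical replacement for
    the eval-based quick_search.sh). The argument list is handed
    directly to the rg process, so a payload like `'; rm -rf /; #`
    is treated as literal search text, never interpreted by a shell."""
    result = subprocess.run(
        ["rg", "--fixed-strings", "--", query, search_path],
        capture_output=True,
        text=True,
        check=False,  # rg exits 1 on "no matches"; don't raise for that
    )
    return result.stdout
```

`--fixed-strings` treats the query as a literal string rather than a regex, and the `--` separator prevents a query beginning with `-` from being parsed as an rg option.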
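For the high-severity finding, a sketch of the recommended path restriction. The `ALLOWED_ROOTS` allowlist here is hypothetical; a real deployment would name its own document directories.

```python
from pathlib import Path

# Hypothetical allowlist; a real deployment would scope this to the
# skill's own document directories.
ALLOWED_ROOTS = (Path("/srv/docs"), Path.home() / "projects")

def validate_search_path(raw: str) -> Path:
    """Resolve the user-supplied path (collapsing `..` and following
    symlinks) and reject anything outside the allowed roots."""
    candidate = Path(raw).resolve()
    for root in ALLOWED_ROOTS:
        if candidate.is_relative_to(root.resolve()):
            return candidate
    raise ValueError(f"search path {raw!r} is outside the allowed roots")
```

Note that `Path.is_relative_to` requires Python 3.9+; on older versions the same check can be done with `candidate.relative_to(root)` inside a `try`/`except ValueError`.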
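The medium-severity finding also suggests narrowing the `rglob('*')` walk itself. A minimal sketch, assuming a documentation-oriented suffix allowlist (the `INDEXABLE_SUFFIXES` set is illustrative):

```python
from pathlib import Path
from typing import Iterator

# Illustrative allowlist of document types worth indexing.
INDEXABLE_SUFFIXES = {".md", ".rst", ".txt"}

def iter_indexable_files(root: Path) -> Iterator[Path]:
    """Yield only regular files with an allowed suffix, instead of
    handing every reachable file to the indexer via rglob('*')."""
    for entry in root.rglob("*"):
        if entry.is_file() and entry.suffix.lower() in INDEXABLE_SUFFIXES:
            yield entry
```

Combined with the `validate_search_path` sketch above, this bounds both where the indexer may look and what it may read.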
[View the full report on SkillShield](https://skillshield.io/report/4c0f6d8bd6b72312)