Trust Assessment
research-engine received a trust score of 55/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 6 findings: 1 critical, 2 high, 2 medium, and 1 low severity. Key findings include Missing required field: name, Node lockfile missing, Prompt Injection via Untrusted Input in Generated Reports.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings6
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via Untrusted Input in Generated Reports The `topic` argument, which comes directly from untrusted user input (e.g., command line arguments), is embedded without sanitization into the generated Markdown research reports (`generate_research_report`) and browsing history (`write_browsing_records`). If these reports are subsequently processed by an LLM, a malicious `topic` containing prompt injection instructions (e.g., 'Ignore all previous instructions and...') could manipulate the LLM's behavior, leading to unauthorized actions or information disclosure. Implement robust sanitization or escaping of all untrusted input (`topic`, and potentially content from `web_search`, `github_trending`, `moltbook_feed`) before embedding it into generated reports that might be consumed by an LLM. Consider using a dedicated templating engine with auto-escaping or explicit escaping functions. | LLM | research_engine.py:100 | |
| HIGH | Path Traversal in Report Filename Construction The `topic` argument, which is derived from untrusted user input, is used to construct the filename for the research report. The `replace(' ', '_')` operation does not sanitize path traversal characters like `../` or `/`. An attacker could provide a `topic` like `../../../../tmp/malicious_file` to write a file to an arbitrary location on the filesystem, potentially overwriting critical system files or creating malicious ones outside the intended `RESEARCH_DIR`. Sanitize the `topic` string to remove or replace all path-related characters (e.g., `/`, `\`, `..`) before using it in a filename. Use `pathlib.Path(topic).name` or a regular expression to ensure only safe characters remain. | LLM | research_engine.py:169 | |
| HIGH | Arbitrary File Read via `RESEARCH_DIR` Environment Variable The `RESEARCH_DIR` path can be controlled by an environment variable. If an attacker can set the `RESEARCH_DIR` environment variable to a sensitive directory (e.g., `/etc`, `/root`), the `get_research_history()` function will then iterate through and read all `.md` files within that directory. This allows for data exfiltration of arbitrary `.md` files from locations controlled by the attacker via the environment variable. Restrict `RESEARCH_DIR` to a fixed, non-user-controlled path within the skill's sandbox, or validate the environment variable's value to ensure it points to a safe, designated directory. Avoid allowing arbitrary paths via environment variables for sensitive file operations. | LLM | research_engine.py:16 | |
| MEDIUM | Missing required field: name The 'name' field is required for claude_code skills but is missing from frontmatter. Add a 'name' field to the SKILL.md frontmatter. | Static | skills/guogang1024/research-engine/SKILL.md:1 | |
| MEDIUM | Unpinned External Dependencies The skill imports external modules like `tools` (for `web_fetch`, `web_search`) and `moltbook_skill` (for `get_feed`). However, the `package.json` explicitly lists `"dependencies": {}`, indicating that these dependencies are not declared or pinned to specific versions within the skill's manifest. This creates a supply chain risk where updates to these unpinned dependencies could introduce vulnerabilities, breaking changes, or malicious code without the skill author's explicit review or control. Declare and pin all external dependencies with specific version numbers in a `requirements.txt` file or within the `package.json` (if applicable for Python dependencies in this ecosystem). Implement a dependency scanning process to monitor for known vulnerabilities in declared dependencies. | LLM | research_engine.py:30 | |
| LOW | Node lockfile missing package.json is present but no lockfile was found (package-lock.json, pnpm-lock.yaml, or yarn.lock). Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/guogang1024/research-engine/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/b4d4cf216dd92c98)
Powered by SkillShield