Trust Assessment
fuxi-api received a trust score of 66/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 0 high, 1 medium, and 0 low severity. Key findings: "Direct user input passed to shell command via `exec`" and "Exposure of internal SQL queries and database results".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Direct user input passed to shell command via `exec` | LLM | SKILL.md:28 |
| MEDIUM | Exposure of internal SQL queries and database results | LLM | SKILL.md:60 |

**CRITICAL — Direct user input passed to shell command via `exec` (SKILL.md:28).** The skill instructs the LLM to use the `exec` tool with `node {baseDir}/scripts/ask.js "<自然语言问题>"` (the placeholder means "natural-language question"). The instruction explicitly states that the placeholder should "directly use the user's question text" (直接使用用户的问题文本). If the `exec` tool does not properly escape or sanitize the user's input before passing it to the shell, a malicious user could inject arbitrary shell commands, leading to remote code execution. Remediation: implement robust input sanitization and escaping for shell commands, and prefer a tool's dedicated argument-passing mechanism over direct shell command construction. If `exec` is necessary, ensure the user input is properly escaped for the shell (e.g., `shlex.quote` in Python or an equivalent for Node.js). Alternatively, pass the user input via an environment variable or a temporary file, if the script is designed to read from one, avoiding direct shell-argument injection.

**MEDIUM — Exposure of internal SQL queries and database results (SKILL.md:60).** The skill explicitly instructs the LLM to "show the generated SQL to the user" (把生成的 SQL 展示给用户) whenever the `sql` field is present in the API response, and the `data` field containing query results is likewise presented to the user. `scripts/ask.js` confirms that the full API response, including `sql` and `data`, is written to stdout. This can expose internal database schema, table names, column names, and potentially sensitive rows from the Fuxi database (伏羲数据库) if the Vanna AI system can be prompted into generating revealing SQL queries, or if the data itself is sensitive. The API endpoint `https://vanna-ai-sql-api-ontest.inner.chj.cloud/ask` is an internal URL, suggesting the data it accesses is not meant for public exposure. Remediation: re-evaluate whether raw SQL queries and raw result rows need to reach the end user; redact sensitive information from the `sql` and `data` fields before presentation, and expose only aggregated or summarized information relevant to the user's request, not the underlying query or raw database rows.