Trust Assessment
siyuan-task received a trust score of 13/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 8 findings: 3 critical, 3 high, 2 medium, and 0 low severity. Key findings include network egress to untrusted endpoints, a suspicious `urllib.request` import, and potential data exfiltration (a file read combined with a network send).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 3/100, indicating severe behavioral-safety concerns.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (8)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. *Remediation:* review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/zhhkheaven/siyuan-task-skill/SKILL.md:11 |
| CRITICAL | **Arbitrary file read leading to data exfiltration.** The `attach_image_to_task` function in `task_ops.py` takes an `image_path` directly from user input (via `sys.argv` in the `main` function) and passes it to `siyuan_api.py::upload_asset`, which opens and reads the file at that path using `open(fp, 'rb')`. An attacker can therefore specify any file accessible to the skill's execution environment (e.g., `/etc/passwd`, `/root/.ssh/id_rsa`) and have its contents uploaded to the configured SiYuan instance, or to a malicious server if `SIYUAN_API_URL` is compromised. *Remediation:* validate the `image_path` argument so it points only to allowed image files within a restricted directory; implement strict input validation to prevent path traversal (e.g., `../`) and verify the path refers to an actual image file. Consider a file picker or a temporary upload mechanism instead of direct path input. | LLM | scripts/siyuan_api.py:200 |
| CRITICAL | **SQL injection.** Multiple functions in `task_ops.py` construct SQL queries by directly embedding unsanitized user-provided input (`row_id`, `task_name`, `status`) into the SQL statement string. An attacker can inject malicious SQL, potentially leading to unauthorized data access, modification, or deletion within the SiYuan Note database; for example, providing `row_id = ' OR 1=1 --'` could bypass intended row restrictions. *Remediation:* use parameterized queries or prepared statements for all SQL operations. If the SiYuan API does not support parameterized queries directly, strictly sanitize and escape all user-provided input before embedding it into SQL strings. For `row_id`, validate against expected ID formats (e.g., UUIDs) and reject empty values. | LLM | scripts/task_ops.py:120 |
| HIGH | **Potential data exfiltration: file read + network send.** The `upload_asset` function reads files and sends data over the network, which may indicate data exfiltration. *Remediation:* review this function to ensure file contents are not sent to external servers. | Static | skills/zhhkheaven/siyuan-task-skill/scripts/siyuan_api.py:151 |
| HIGH | **Credential harvesting via malicious API endpoint.** The `SiYuanClient` in `siyuan_api.py` sends `SIYUAN_API_TOKEN` (loaded from `config.env`) in the `Authorization` header of every request to `SIYUAN_API_URL`. If an attacker can manipulate `SIYUAN_API_URL` in `config.env` (e.g., through a prior command injection enabled by the skill's `Bash(python3:*)` permission), the API token is exfiltrated to an attacker-controlled server. *Remediation:* implement validation and integrity checks for `config.env` to prevent unauthorized modification of `SIYUAN_API_URL`; consider encrypting sensitive credentials such as `SIYUAN_API_TOKEN` at rest; ensure `SIYUAN_API_URL` is always a trusted endpoint that cannot be altered by untrusted input or compromised skill execution. | LLM | scripts/siyuan_api.py:49 |
| HIGH | **Path traversal in document creation/renaming.** The `create_task` and `rename_task` functions in `task_ops.py` use user-provided `task_name` and `new_name` to construct document paths (e.g., `/{self.TASK_DOC_NAME}/{task_name}`). If the SiYuan API does not sufficiently sanitize these paths, an attacker could use traversal sequences (e.g., `../`) to create or rename documents outside the intended `任务清单` ("task list") directory, potentially enabling unauthorized file-system manipulation within the SiYuan Note instance. *Remediation:* sanitize `task_name` and `new_name` to remove or escape path traversal characters (e.g., `/`, `\`, `..`); canonicalize the resulting path and validate it against an allowed base directory before passing it to the SiYuan API, which should also perform its own path sanitization. | LLM | scripts/task_ops.py:160 |
| MEDIUM | **Suspicious import: `urllib.request`.** This module provides network or low-level system access; network and system modules in skill code may indicate data exfiltration. *Remediation:* verify this import is necessary. | Static | skills/zhhkheaven/siyuan-task-skill/scripts/siyuan_api.py:5 |
| MEDIUM | **Excessive permissions declared.** The skill declares `Bash(python3:*)` as an allowed tool, granting it the ability to execute arbitrary Python 3 commands via the shell. While the skill's functionality relies on Python, this broad permission, combined with the identified SQL injection, data exfiltration, and credential harvesting vulnerabilities, significantly increases the attack surface and the potential impact of a successful exploit: an attacker could leverage it to execute arbitrary code on the host system. *Remediation:* narrow the `allowed-tools` permission to only the specific scripts or functions required (e.g., `Bash(python3:scripts/task_ops.py)` if only that script is needed); alternatively, adopt a more granular permission model if the platform supports it, and audit all Python scripts thoroughly for vulnerabilities. | LLM | SKILL.md:1 |
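The excessive-permissions finding's own suggested narrowing could be expressed in the SKILL.md frontmatter roughly as below; the exact frontmatter syntax accepted by the skill platform is an assumption here, and only the `Bash(python3:scripts/task_ops.py)` pattern comes from the report:

```yaml
# SKILL.md frontmatter (sketch): restrict shell access to the one script
# the skill actually needs, instead of arbitrary python3 execution.
allowed-tools: Bash(python3:scripts/task_ops.py)
```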
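For the arbitrary-file-read finding, the recommended restricted-directory validation could be sketched as below. This is a minimal illustration, not the skill's code: `ALLOWED_DIR`, `ALLOWED_SUFFIXES`, and `validate_image_path` are hypothetical names, and the allowed root would depend on the deployment.

```python
from pathlib import Path

ALLOWED_DIR = Path("assets").resolve()          # hypothetical upload root
ALLOWED_SUFFIXES = {".png", ".jpg", ".jpeg", ".gif", ".webp"}

def validate_image_path(user_path: str) -> Path:
    """Resolve a user-supplied path and ensure it stays inside ALLOWED_DIR
    and has an image extension; raise ValueError otherwise."""
    resolved = Path(user_path).resolve()
    # Path.resolve() collapses "../" segments, so a traversal attempt ends
    # up outside ALLOWED_DIR and fails the containment check below.
    if ALLOWED_DIR not in resolved.parents and resolved != ALLOWED_DIR:
        raise ValueError(f"path escapes allowed directory: {user_path!r}")
    if resolved.suffix.lower() not in ALLOWED_SUFFIXES:
        raise ValueError(f"not an allowed image type: {user_path!r}")
    return resolved
```

With a check like this applied before `upload_asset` is called, inputs such as `/etc/passwd` or `assets/../secret.png` are rejected instead of being read and uploaded.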
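Because the SiYuan SQL endpoint accepts raw query strings, the SQL-injection finding's fallback advice (strict input validation when parameterized queries are unavailable) could look roughly like this. The block-ID pattern shown is an assumption for illustration, not taken from the skill:

```python
import re

# Assumed SiYuan block-ID shape: 14-digit timestamp, hyphen, 7 lowercase
# alphanumerics (e.g. "20240213100000-abcdefg"). Adjust to the real format.
BLOCK_ID_RE = re.compile(r"^\d{14}-[0-9a-z]{7}$")

def safe_block_id(row_id: str) -> str:
    """Reject anything that is not a well-formed block ID before it is
    interpolated into a SQL string, blocking payloads like "' OR 1=1 --"."""
    if not BLOCK_ID_RE.fullmatch(row_id):
        raise ValueError(f"invalid block id: {row_id!r}")
    return row_id
```

Validating the identifier's shape up front means the injection payload from the finding never reaches the query builder.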
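For the credential-harvesting finding, one way to ensure the token is only ever sent to a trusted endpoint is a host allowlist checked before each request. The trusted-host set here is an assumption (a typical local SiYuan instance); `validate_api_url` is an illustrative name:

```python
from urllib.parse import urlparse

# Hosts the API token may be sent to -- an assumption for a local
# SiYuan deployment; adjust to your environment.
TRUSTED_HOSTS = {"127.0.0.1", "localhost"}

def validate_api_url(url: str) -> str:
    """Refuse to attach the Authorization header unless the configured
    SIYUAN_API_URL targets a trusted host over HTTP(S)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"unexpected scheme in API URL: {url!r}")
    if parsed.hostname not in TRUSTED_HOSTS:
        raise ValueError(f"untrusted API host: {url!r}")
    return url
```

Even if `config.env` is tampered with, a check like this stops the token from being posted to an attacker-controlled collector.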
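The path-traversal finding's remediation for `task_name`/`new_name` can be sketched as a name-level filter applied before the document path is built. The exact characters SiYuan itself rejects are an assumption; `safe_task_name` is an illustrative helper:

```python
import re

# Characters that could alter the document path when the name is embedded
# in "/{TASK_DOC_NAME}/{task_name}". What the SiYuan API additionally
# rejects is an assumption here.
_FORBIDDEN = re.compile(r"[/\\]|\.\.")

def safe_task_name(name: str) -> str:
    """Reject task names containing path separators or traversal
    sequences before they are used to build a SiYuan document path."""
    cleaned = name.strip()
    if not cleaned:
        raise ValueError("empty task name")
    if _FORBIDDEN.search(cleaned):
        raise ValueError(f"illegal characters in task name: {name!r}")
    return cleaned
```

This keeps inputs like `../outside` from escaping the intended task-list directory, while the SiYuan API should still canonicalize and validate paths on its side.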
[Full report](https://skillshield.io/report/7db6eec30543187a)