Trust Assessment
pixiv received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 9 findings: 6 critical, 1 high, 1 medium, and 1 informational. Key findings include network egress to untrusted endpoints, an unpinned npm dependency version, and a prompt injection instructing the LLM to harvest credentials.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest, at 0/100, the minimum possible score.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (9)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Network egress to untrusted endpoints.** Axios POST/PUT to a URL. *Fix:* review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | `skills/matrix-meta/pixiv-skill/scripts/pixiv-app-publish.js:71` |
| CRITICAL | **Network egress to untrusted endpoints.** Axios POST/PUT to a URL. *Fix:* review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | `skills/matrix-meta/pixiv-skill/scripts/pixiv-cli.js:103` |
| CRITICAL | **Prompt Injection: LLM Instructed to Harvest Credentials.** The skill's setup instructions explicitly tell the host LLM to "Ask the user for their Pixiv Refresh Token" — a direct prompt-injection attempt that instructs the LLM to solicit a sensitive credential (the refresh token), which could then be used by the skill or logged by the LLM. *Fix:* remove instructions for the LLM to solicit credentials directly; instead, guide the user to provide the token securely (e.g., via an environment variable or a configuration mechanism that does not involve the LLM handling the token). | LLM | `SKILL.md:10` |
| CRITICAL | **Command Injection via User-Controlled Arguments in Shell Execution.** The skill's usage examples and setup instructions execute `node` scripts (`pixiv-cli.js`) via shell commands, with arguments like `<REFRESH_TOKEN>`, `<FILEPATH>`, `<TITLE>`, and `[TAGS_COMMA_SEPARATED]` taken directly from user input. If these contain shell metacharacters (e.g., `;`, `&&`, or a pipe), they can trigger arbitrary command execution; for example, `node skills/pixiv/scripts/pixiv-cli.js login "token; rm -rf /"` could be exploited. *Fix:* validate and sanitize all user-provided arguments before constructing shell commands, and prefer Node.js's `child_process.spawn` with arguments passed as an array, which prevents shell interpretation. | LLM | `SKILL.md:13` |
| CRITICAL | **Data Exfiltration via Arbitrary File Upload.** The `post` command in `pixiv-cli.js` and the `publishAppAPI` function in `pixiv-app-publish.js` both accept a `filepath` argument directly from user input, then pass it without sufficient validation to `fs.createReadStream(filepath)` to read and upload the file to Pixiv. An attacker can therefore specify any file the Node.js process can read, exfiltrating sensitive data. *Fix:* strictly validate `filepath` so it points only to allowed, temporary, or user-specific directories; ideally, do not accept arbitrary file paths at all — use a secure file-selection mechanism or a whitelist of allowed directories. | LLM | `scripts/pixiv-cli.js:107` |
| CRITICAL | **Data Exfiltration via Arbitrary File Upload (Duplicate Finding).** The `publishAppAPI` function in `pixiv-app-publish.js` likewise accepts a `filepath` argument from user input (`args[0]`) and uses it without sufficient validation in `fs.createReadStream(filepath)`. This duplicates the arbitrary file upload vulnerability in `pixiv-cli.js`, indicating a consistent pattern of insecure file handling. *Fix:* same as above — restrict `filepath` to allowed directories or use a secure file-selection mechanism. | LLM | `scripts/pixiv-app-publish.js:55` |
| HIGH | **Refresh Token Stored in Insecure Configuration File.** The Pixiv Refresh Token, a sensitive credential, is stored in `config.json` within the skill's directory (`../config.json`). This file is not secured by default and could be read by other processes if the directory is not properly protected, risking credential exposure and unauthorized access to the user's Pixiv account. *Fix:* avoid storing credentials in plaintext configuration files; prefer the environment variable already supported (`process.env.PIXIV_REFRESH_TOKEN`) or a dedicated secrets manager. If local storage is unavoidable, apply restrictive file permissions and encryption. | LLM | `scripts/pixiv-cli.js:16` |
| MEDIUM | **Unpinned npm dependency version.** The dependency `@ibaraki-douji/pixivts` is not pinned to an exact version (`^3.2.0`). *Fix:* pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | `skills/matrix-meta/pixiv-skill/package.json` |
| INFO | **Shell Script Included in Skill Package.** The package includes a shell script (`pack.sh`). While this particular script is for packaging and does not process user input, its presence shows the skill environment supports shell execution, which — if the LLM is prompted to run arbitrary commands — could enable command injection. *Fix:* strictly prevent the LLM from executing arbitrary shell commands; review and sandbox any necessary shell scripts and limit them to predefined, safe operations. | LLM | `scripts/pack.sh:1` |
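The unpinned-dependency finding is addressed by dropping the caret so npm resolves exactly one version. A `package.json` fragment showing the pinned form (version taken from the finding itself):

```json
{
  "dependencies": {
    "@ibaraki-douji/pixivts": "3.2.0"
  }
}
```

Committing a lockfile (`package-lock.json`) alongside the pin further constrains transitive dependencies.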
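The two network-egress findings recommend restricting outbound calls to well-known service domains. One way to enforce that is a host allowlist checked before every request. This is a minimal sketch, not code from the skill; the allowed hostnames (`app-api.pixiv.net`, `oauth.secure.pixiv.net`) are assumptions about what the legitimate Pixiv API endpoints would be:

```javascript
// Hypothetical egress allowlist: reject outbound requests whose host is
// not a known Pixiv API domain. Raw IPs, webhook collectors, and plain
// HTTP all fail the check.
const ALLOWED_HOSTS = new Set(["app-api.pixiv.net", "oauth.secure.pixiv.net"]);

function assertAllowedUrl(rawUrl) {
  const { hostname, protocol } = new URL(rawUrl);
  if (protocol !== "https:" || !ALLOWED_HOSTS.has(hostname)) {
    throw new Error(`blocked egress to ${rawUrl}`);
  }
}
```

Calling this guard immediately before each Axios POST/PUT would make any new untrusted endpoint fail loudly instead of silently exfiltrating data.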
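The command-injection finding recommends `child_process.spawn` with arguments passed as an array. The sketch below (the `echoArg` helper is hypothetical, built only to demonstrate the point) shows why this works: without a shell in the loop, metacharacters arrive at the child process as literal text:

```javascript
const { spawnSync } = require("node:child_process");

// With an argv array, nothing is ever parsed by /bin/sh, so an input
// like "token; rm -rf /" stays one literal argument.
function echoArg(arg) {
  const out = spawnSync(
    process.execPath, // the current node binary
    ["-e", "process.stdout.write(process.argv[1])", arg],
    { encoding: "utf8" } // shell: false is the default for spawnSync
  );
  return out.stdout;
}
```

The same pattern applied to the skill would mean invoking `pixiv-cli.js` as `spawn("node", ["scripts/pixiv-cli.js", command, token])` rather than interpolating user input into a shell string.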
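Both file-upload findings recommend restricting `filepath` to an allowed directory before it reaches `fs.createReadStream`. A hedged sketch of such a guard (the `resolveUploadPath` name and the `uploads` directory are illustrative assumptions, not part of the skill):

```javascript
const path = require("node:path");

// Resolve the user-supplied path against an allowed base directory and
// refuse anything that escapes it. Resolving first defeats "../"
// traversal; the trailing-separator check stops prefix tricks like a
// sibling directory named "uploads-evil".
function resolveUploadPath(filepath, allowedDir = path.resolve("uploads")) {
  const resolved = path.resolve(allowedDir, filepath);
  if (resolved !== allowedDir && !resolved.startsWith(allowedDir + path.sep)) {
    throw new Error(`refusing to read outside ${allowedDir}: ${filepath}`);
  }
  return resolved;
}
```

Only the value returned by this guard should ever be handed to `fs.createReadStream`.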
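For the token-storage finding, the report notes that `process.env.PIXIV_REFRESH_TOKEN` is already supported and preferable to a plaintext `config.json`. A minimal sketch of making the environment variable the only path (the `getRefreshToken` helper is hypothetical):

```javascript
// Read the refresh token from the environment instead of config.json.
// Accepting an env object makes the helper testable without mutating
// the real process environment.
function getRefreshToken(env = process.env) {
  const token = env.PIXIV_REFRESH_TOKEN;
  if (!token) {
    throw new Error(
      "PIXIV_REFRESH_TOKEN is not set; export it in the environment rather than storing it in config.json"
    );
  }
  return token;
}
```

Failing fast when the variable is missing also gives the user a clear setup instruction that never routes the credential through the LLM.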
Full report: [skillshield.io/report/956b8fd542bcb02b](https://skillshield.io/report/956b8fd542bcb02b)