Trust Assessment
paperless received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 2 high, 2 medium, and 0 low severity. Key findings include Potential Command Injection via CLI arguments, Sensitive Document Content Exfiltration Risk, and Arbitrary Local File Write Capability.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 56/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via CLI arguments.** The skill invokes the `ppls` CLI with string arguments (e.g., `--title`, `--output`). If user-provided input is interpolated into these arguments without sanitization, an attacker could inject shell metacharacters and achieve arbitrary command execution on the host; a malicious title such as `"; rm -rf /; echo "` could be executed if not properly escaped. *Remediation:* validate and sanitize all user-provided strings before passing them to `ppls`, use a library that safely escapes shell arguments, or use a programmatic API for `ppls` (if available) instead of direct shell execution. | LLM | SKILL.md:69 |
| HIGH | **Sensitive Document Content Exfiltration Risk.** The skill can download documents (`ppls documents download`) and retrieve their full details, including OCR content (`ppls documents show --json`). A compromised or misused LLM could be prompted to download sensitive documents to the local filesystem and then, given access to other tools (file reading, network requests), exfiltrate their content. *Remediation:* enforce strict access controls and data-handling policies for downloaded documents, sandbox the LLM's execution environment so it cannot exfiltrate local files, limit the LLM's ability to choose arbitrary download paths, and consider redacting sensitive information from document content before the LLM processes it. | LLM | SKILL.md:55 |
| MEDIUM | **Arbitrary Local File Write Capability.** The `ppls documents download` command accepts an arbitrary output path (`--output` or `--output-dir`), so the skill can write files to any location on the local filesystem, potentially overwriting critical system files or writing to sensitive directories if the LLM is instructed to do so. *Remediation:* restrict writes to a designated, sandboxed directory, do not let user-controlled input dictate the full output path, and enforce a whitelist of allowed directories. | LLM | SKILL.md:55 |
| MEDIUM | **Unpinned npm Dependency.** The skill's manifest specifies `npm install -g @nickchristensen/ppls` without a version, so the latest package version is always installed. If a future release introduces a vulnerability, breaking change, or malicious code (e.g., after a compromise of the package maintainer's account), the skill would inherit the risk automatically, without review. *Remediation:* pin the dependency to a specific, known-good version (e.g., `@nickchristensen/ppls@1.2.3`) to ensure reproducibility, and review and update dependencies manually. | LLM | SKILL.md |
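The command-injection remediation above amounts to never routing user input through a shell. A minimal sketch in Python (the `ppls documents edit` invocation and the wrapper functions are illustrative assumptions, not part of the skill):

```python
import shlex
import subprocess


def build_ppls_args(doc_id: str, title: str) -> list[str]:
    # Build an argv list for the ppls CLI. Passed to subprocess.run
    # with shell=False (the default), metacharacters in `title` such
    # as '"; rm -rf /; echo "' reach ppls as literal text and are
    # never interpreted by a shell.
    return ["ppls", "documents", "edit", doc_id, "--title", title]


def safe_shell_string(args: list[str]) -> str:
    # If a shell string is unavoidable, quote every argument.
    return " ".join(shlex.quote(a) for a in args)


def run_ppls(doc_id: str, title: str) -> None:
    # shell=False by default: no shell ever parses `title`.
    subprocess.run(build_ppls_args(doc_id, title), check=True)
```

The key design choice is the argument list: `subprocess.run` with a list bypasses the shell entirely, so quoting is only needed for the fallback string form.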
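The exfiltration finding suggests redacting sensitive content before it reaches the LLM. A hedged sketch of such a filter (the patterns are simplistic examples; a real deployment would use a vetted PII detector):

```python
import re

# Illustrative patterns only: real IBAN/email detection needs far
# more care than these two regexes.
PATTERNS = {
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text
```

Running OCR output through a filter like this before handing it to the model limits what a prompted exfiltration could leak, though it is a mitigation, not a substitute for sandboxing.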
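The file-write remediation, confining downloads to one directory, can be sketched as a path check (the sandbox directory and helper name are assumptions for illustration):

```python
from pathlib import Path

# Example sandbox directory; a real skill would make this configurable.
ALLOWED_DIR = Path("/tmp/paperless-downloads")


def resolve_output_path(requested: str) -> Path:
    """Resolve a requested output path inside ALLOWED_DIR.

    Joining with pathlib discards ALLOWED_DIR if `requested` is
    absolute, and resolve() collapses any '..' segments, so both
    escape routes end up caught by the is_relative_to check.
    """
    candidate = (ALLOWED_DIR / requested).resolve()
    if not candidate.is_relative_to(ALLOWED_DIR.resolve()):
        raise ValueError(f"output path escapes sandbox: {requested}")
    return candidate
```

`Path.is_relative_to` requires Python 3.9+; on older versions the same check can be written with `os.path.commonpath`.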
Embed Code
[SkillShield trust badge](https://skillshield.io/report/57e61ded1f959184)
Powered by SkillShield