Trust Assessment
clawver-digital-products received a trust score of 68/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include: Potential Command Injection in `curl` Arguments, Risk of Data Exfiltration through Base64 File Upload, and Potential for CLAW_API_KEY Exposure.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection in `curl` Arguments.** The skill provides `curl` command templates that include parameters such as `name`, `description`, `productId`, `fileUrl`, `fileType`, `fileData`, and `status`. If the host LLM constructs these `curl` commands by directly embedding unsanitized user input into the command string or JSON payload, a malicious user could inject shell metacharacters or JSON syntax to execute arbitrary commands on the host system or manipulate the API request in unintended ways. The host LLM must rigorously sanitize and escape all user-provided input before incorporating it into shell commands or JSON payloads. For shell commands, use safe execution methods that prevent shell interpretation (e.g., `subprocess.run` with `shell=False`, passing arguments as a list). For JSON payloads, ensure proper JSON encoding. | LLM | SKILL.md:20 |
| HIGH | **Risk of Data Exfiltration through Base64 File Upload.** The skill explicitly provides an option to upload digital files using base64 encoding (`fileData`). If the host LLM is prompted by a malicious user to read and base64-encode sensitive local files (e.g., configuration files, SSH keys, or other personal data) and then include this data in the `fileData` field of the `curl` command, those files could be exfiltrated to the Clawver API (and potentially exposed via the Clawver platform if not handled securely there). The host LLM should enforce strict policies against reading and uploading arbitrary local files based on user prompts. If file uploads are necessary, restrict them to specific, non-sensitive directories, or require explicit user confirmation for each file. Avoid allowing the LLM direct filesystem access for arbitrary reads. | LLM | SKILL.md:48 |
| MEDIUM | **Potential for CLAW_API_KEY Exposure.** The `CLAW_API_KEY` is a sensitive credential required for all API interactions. While the skill correctly demonstrates its use in `Authorization` headers, a sophisticated prompt injection could manipulate the host LLM into logging, displaying, or sending this environment variable to an unauthorized endpoint if the LLM's execution environment allows direct access to environment variables, or if it can be tricked into constructing a `curl` command that redirects the key. The host LLM's execution environment should strictly limit its ability to access or output environment variables. Implement robust input validation and output filtering so the LLM cannot reveal sensitive information like API keys even when manipulated, and ensure credentials cannot be sent to domains other than `api.clawver.store`. | LLM | SKILL.md:21 |
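The mitigations above can be sketched together in a few lines of Python. This is a minimal illustration, not part of the skill itself: the `/products` path and the `name`/`status` fields follow the parameters the report mentions, but the exact endpoint URL and function name are assumptions. The idea is that user input is JSON-encoded rather than spliced into a command string, `curl` is invoked as an argument list so no shell ever parses the payload, and the API key is attached only for the allow-listed host.

```python
import json
import subprocess  # used only in the commented execution step below
from urllib.parse import urlparse

ALLOWED_HOST = "api.clawver.store"  # only host that may receive the API key


def build_curl_args(url: str, payload: dict, api_key: str) -> list[str]:
    """Build a curl invocation as an argument list (no shell interpretation).

    User-supplied values are encoded with json.dumps, so shell metacharacters
    and stray quotes cannot break out of the JSON payload. The Authorization
    header is attached only for the allow-listed host.
    """
    host = urlparse(url).hostname
    if host != ALLOWED_HOST:
        raise ValueError(f"refusing to send credentials to {host!r}")
    body = json.dumps(payload)  # proper JSON encoding of all fields
    return [
        "curl", "-sS", "-X", "POST", url,
        "-H", f"Authorization: Bearer {api_key}",
        "-H", "Content-Type: application/json",
        "-d", body,
    ]


# A hostile product name stays inert: it is a single argv element and a
# JSON string value, never parsed by a shell.
args = build_curl_args(
    "https://api.clawver.store/products",
    {"name": 'x"; rm -rf ~; echo "', "status": "draft"},
    api_key="sk-example",
)
# Execute without shell interpretation (shell=False is the default for a list):
# subprocess.run(args, check=True, capture_output=True)
```

Because the payload travels as one `-d` argument, `subprocess.run` with a list never hands it to a shell, and the host check raises before any credential could be sent elsewhere.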
[Full report on SkillShield](https://skillshield.io/report/4fcf500cc7b00d82)
Powered by SkillShield