Trust Assessment
postproxy received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include Potential Command Injection via User Arguments and Excessive File System Read Permissions via File Upload.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via User Arguments.** The skill declares 'Bash' as an allowed tool and includes a '$ARGUMENTS' placeholder directly within the skill's execution context. An attacker can inject arbitrary shell commands by supplying malicious input through '$ARGUMENTS', potentially leading to remote code execution, data exfiltration, or system compromise. The examples provided are `curl` commands, which are susceptible to argument injection. Avoid interpolating user-provided arguments directly into shell commands. Instead, use a structured approach, such as a dedicated tool or library that safely constructs API calls, or strictly validate and sanitize all user input before passing it to the shell. If direct shell execution is unavoidable, quote arguments with `printf %q` or a similar mechanism (see the first sketch after this table). | LLM | SKILL.md:100 |
| HIGH | **Excessive File System Read Permissions via File Upload.** The skill demonstrates a file upload mechanism using `curl -F 'media[]=@/path/to/image.jpg'`. Combined with the 'Bash' tool permission, this lets the skill read arbitrary files from the local filesystem whenever a user supplies a path, which an attacker could exploit to exfiltrate sensitive files (e.g., configuration files, SSH keys, environment variables). Restrict the skill's ability to read arbitrary files: if uploads are necessary, strictly validate file paths, read only from a designated, sandboxed directory (see the second sketch after this table), or use a dedicated file upload API that does not expose the underlying filesystem to user input. Consider whether the skill truly needs direct filesystem access for uploads or whether media URLs would suffice. | LLM | SKILL.md:59 |
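The remediation for the command-injection finding comes down to never letting model-supplied text be re-parsed by a shell. Below is a minimal Bash sketch of that pattern; the endpoint URL and variable names are placeholders introduced for illustration, not values taken from the skill.

```bash
#!/usr/bin/env bash
# Minimal sketch: pass user-supplied text to curl as data, never as shell syntax.
# API_URL and user_text are illustrative placeholders, not values from the skill.
set -euo pipefail

API_URL="https://example.invalid/api/v1/statuses"   # placeholder endpoint
user_text="$1"                                       # value that would arrive via $ARGUMENTS

# The value travels as a single argv element; the shell never re-parses it,
# so backticks, semicolons, or $( ) in the input stay inert.
curl --fail --silent --show-error \
  -X POST "$API_URL" \
  --data-urlencode "status=${user_text}"

# If a command line must be logged or echoed, quote it with printf %q first.
printf 'equivalent command: curl --data-urlencode %q %q\n' "status=${user_text}" "$API_URL"
```

The key point is that the untrusted value never appears inside a string that a shell evaluates; `printf %q` is only needed when a command must be rendered back into text.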
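For the file-upload finding, confining reads to a sandboxed directory can be done by resolving the user-supplied path and checking the result against an allow-listed root. A sketch assuming GNU `realpath`; `MEDIA_DIR` and the upload endpoint are assumptions, not values defined by the skill.

```bash
#!/usr/bin/env bash
# Minimal sketch: only upload files that resolve inside a designated media directory.
# MEDIA_DIR and UPLOAD_URL are hypothetical; adjust to the skill's real configuration.
set -euo pipefail

MEDIA_DIR="${MEDIA_DIR:-$HOME/skill-media}"          # assumed allowed upload root
UPLOAD_URL="https://example.invalid/api/v1/media"    # placeholder endpoint
requested="$1"                                       # user-supplied path

# Resolve symlinks and ".." segments, then verify the result stays inside MEDIA_DIR.
resolved="$(realpath -e -- "$requested")"
case "$resolved" in
  "$MEDIA_DIR"/*) ;;                                 # allowed: inside the sandbox
  *) echo "refusing to read $requested: outside $MEDIA_DIR" >&2; exit 1 ;;
esac

curl --fail -X POST "$UPLOAD_URL" -F "media[]=@${resolved}"
```

Resolving the path before the prefix check matters: a naive string comparison on the raw input can be bypassed with symlinks or `../` segments.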
Scan History
Embed Code
[View the full report on SkillShield](https://skillshield.io/report/55d8bd7ebabf1369)
Powered by SkillShield