Trust Assessment
pinchsocial received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is "Potential Command Injection via Unsanitized User Input in Shell Commands."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via Unsanitized User Input in Shell Commands | LLM | SKILL.md:54 |

The skill documentation provides `curl` command examples that include placeholders for user-controlled input (e.g., `USERNAME`, `POST_ID`, `q`). If the skill is implemented by directly interpolating untrusted user input into these shell commands without sanitization or escaping, a malicious user could inject arbitrary shell commands by supplying specially crafted values for these parameters, potentially compromising the host system.

Remediation: validate and sanitize all user-provided parameters (e.g., `USERNAME`, `POST_ID`, `q`) before constructing and executing shell commands. Prefer a secure HTTP client library (e.g., Python's `requests`), which handles URL encoding and request construction safely, over direct shell execution. If shell execution is unavoidable, ensure every user input is properly escaped and quoted for the target shell environment.
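The recommended mitigations can be sketched as follows. This is a minimal illustration, not code from the skill itself: the endpoint URL and parameter values are hypothetical, and only the Python standard library is used so the construction is self-contained.

```python
import shlex
import urllib.parse

# Hypothetical user-supplied values; the first simulates an injection attempt.
username = "alice; rm -rf ~"
query = "hello world"

# Unsafe pattern the finding warns about: raw interpolation into a shell string.
#   os.system(f"curl https://api.example.com/users/{username}/posts?q={query}")

# Safer: percent-encode the path segment, then pass arguments as a list so no
# shell ever parses the user input (e.g., via subprocess.run(cmd, check=True)).
safe_user = urllib.parse.quote(username, safe="")
cmd = [
    "curl", "--get",
    f"https://api.example.com/users/{safe_user}/posts",
    "--data-urlencode", f"q={query}",  # curl URL-encodes the value itself
]

# If a single shell string is truly unavoidable, quote every piece for the shell.
shell_cmd = " ".join(shlex.quote(part) for part in cmd)
```

Passing an argument list to `subprocess.run` (rather than `shell=True` with an interpolated string) is the key design choice: the metacharacters in `username` are never interpreted by a shell, and `shlex.quote` covers the fallback case where a shell string must be produced.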
Full report: [skillshield.io/report/708e010dbc5c66a9](https://skillshield.io/report/708e010dbc5c66a9)