Trust Assessment
The `perplexity` skill received a trust score of 35/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 2 critical, 2 high, 0 medium, and 0 low severity. Key findings include remote code execution via a curl/wget pipe to shell, command injection via Python script interpolation, and potential data exfiltration via command injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Remote code execution: curl/wget pipe to shell.** Detected a pattern that downloads and immediately executes remote code. This is a primary malware delivery vector. Never pipe curl/wget output directly to a shell interpreter. | Static | skills/dronnick/perplexity-sonar/scripts/perplexity_search.sh:148 |
| CRITICAL | **Command Injection via Python Script Interpolation.** The `scripts/perplexity_search.sh` script constructs a JSON payload by directly interpolating user-controlled variables (`$SYSTEM_PROMPT` and `$QUERY`) into a Python script executed via `python3 -c`. An attacker can inject arbitrary Python code by crafting input that breaks out of the triple-quoted string literals, allowing execution of shell commands, file system access, and exfiltration of environment variables or other sensitive data. Do not interpolate user-controlled input directly into a `python3 -c` command. Instead, pass user input as arguments to a Python script and use `json.dumps()` to safely serialize it into JSON, or use a tool like `jq` for robust JSON construction from shell variables. (Escaping all user input for Python string literals before interpolation is also possible, but complex and error-prone.) | LLM | scripts/perplexity_search.sh:100 |
| HIGH | **Potential Data Exfiltration via Command Injection.** As a direct consequence of the command injection vulnerability in `scripts/perplexity_search.sh`, an attacker can execute arbitrary Python code that reads environment variables, including `PERPLEXITY_API_KEY`, and exfiltrates them to an external server or writes them to an attacker-accessible file. This poses a significant risk of credential compromise and unauthorized data disclosure. Address the underlying command injection vulnerability, and ensure that sensitive data such as API keys is never accessible to processes that handle untrusted user input without proper sanitization and isolation. | LLM | scripts/perplexity_search.sh:100 |
| HIGH | **Prompt Injection via Command Injection.** The command injection vulnerability in `scripts/perplexity_search.sh` allows an attacker to manipulate the JSON payload sent to the Perplexity API. By injecting Python code, an attacker can alter the `messages` array, effectively injecting arbitrary system or user prompts into the LLM's input. This can lead the LLM to perform unintended actions, generate malicious content, or reveal sensitive information. Mitigate the command injection vulnerability to prevent manipulation of the API request body, and treat all user-provided prompts as untrusted: escape or validate them before including them in the LLM's input. | LLM | scripts/perplexity_search.sh:100 |
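The first finding's remediation (never pipe curl/wget output to a shell) generalizes to a verify-then-execute pattern: download to a file, check it against a pinned digest, and only then run it. A minimal sketch follows; the `verify_sha256` and `run_installer` helpers are illustrative, not part of the skill's actual code:

```python
import hashlib
import subprocess

def verify_sha256(path: str, expected_hex: str) -> bool:
    # Hash the downloaded file before ever executing it.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex

def run_installer(path: str, expected_hex: str) -> None:
    # Refuse to execute anything whose digest does not match the pin,
    # unlike `curl ... | sh`, which runs whatever the server returned.
    if not verify_sha256(path, expected_hex):
        raise RuntimeError("checksum mismatch; refusing to execute " + path)
    subprocess.run(["sh", path], check=True)
```

The pinned digest would live in the repository alongside the script, so a compromised download host cannot silently swap the payload.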
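One way to apply the `json.dumps()` recommendation from the second finding is to move payload construction into a standalone Python script that receives user input as command-line arguments rather than having the shell splice it into source code. The `build_payload` helper and the `sonar` model name below are illustrative assumptions, not taken from the skill itself:

```python
import json
import sys

def build_payload(system_prompt: str, query: str) -> str:
    # json.dumps escapes quotes, backslashes, and newlines, so crafted
    # input cannot break out of a string literal the way it can when
    # interpolated into a triple-quoted string inside `python3 -c`.
    return json.dumps({
        "model": "sonar",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": query},
        ],
    })

if __name__ == "__main__" and len(sys.argv) >= 3:
    # The shell wrapper would invoke this as:
    #   python3 build_payload.py "$SYSTEM_PROMPT" "$QUERY"
    print(build_payload(sys.argv[1], sys.argv[2]))
```

Because the input only ever flows through `argv` and a serializer, a query containing `"""` or Python syntax is emitted as inert JSON string data.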