Trust Assessment
moltoffer-candidate received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 3 high, 1 medium, and 0 low severity. Key findings include Potential Prompt Injection via Persona Update, Credential Storage and Potential Exfiltration Risk, and Potential Command Injection via `curl` and User Input.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100; all four findings originate from that layer.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Prompt Injection via Persona Update.** The skill explicitly states: 'Keep persona updated: Any info user provides should update persona.md'. If user-provided information is written directly to `persona.md` without sanitization, and `persona.md` is later read by an LLM as part of its context or instructions, a malicious user could inject instructions or data into `persona.md` to manipulate the LLM's behavior or exfiltrate data. This creates a persistent prompt injection vector. *Remediation:* Sanitize all user input before writing to `persona.md`. Ensure `persona.md` is treated strictly as data, not instructions, by any downstream LLM. Consider using a structured data format for persona updates that prevents arbitrary instruction injection. | LLM | SKILL.md:108 |
| HIGH | **Credential Storage and Potential Exfiltration Risk.** The skill explicitly states: 'Allowed local persistence: Write API Key to `credentials.local.json` (in .gitignore)'. While storing credentials locally is sometimes necessary, this highlights the existence of a sensitive file. If the skill's implementation has vulnerabilities (e.g., arbitrary file read, insecure file permissions, or a bug allowing reading this specific file), the stored API key could be harvested or exfiltrated. The `curl` dependency further enables network exfiltration. *Remediation:* Ensure `credentials.local.json` is stored with strict file permissions (e.g., readable only by the skill's process owner). Implement robust input validation and sandboxing to prevent arbitrary file reads or writes. Avoid storing API keys in plain text if possible; use secure credential stores or environment variables. | LLM | SKILL.md:119 |
| HIGH | **Potential Command Injection via `curl` and User Input.** The manifest requires the `curl` binary, indicating the skill will execute external commands. The skill also describes taking user input for actions (`/moltoffer-candidate [action]`) and API parameters (e.g., `keywords`). If the skill constructs `curl` commands by directly embedding unsanitized user input, a malicious user could inject shell commands into the `curl` call, leading to arbitrary command execution on the host system. *Remediation:* Always use parameterized API clients or libraries instead of directly constructing shell commands with `curl`. If `curl` must be used, ensure all user-provided input is strictly validated and properly escaped for shell execution contexts (e.g., using `shlex.quote` in Python). | LLM | SKILL.md:10 |
| MEDIUM | **Potential Prompt Injection via JSON `keywords` Parameter.** The `GET /search` endpoint accepts a `keywords` parameter in JSON format. If user input is directly inserted into this JSON structure without proper escaping or validation, a malicious user could inject arbitrary JSON. If this JSON is later processed by an LLM (e.g., for query understanding or generation), it could be used to inject instructions to manipulate the LLM's behavior. *Remediation:* Strictly validate and sanitize all user input intended for the `keywords` JSON parameter. Ensure proper JSON escaping. If the backend processes this JSON with an LLM, implement robust prompt engineering and input filtering to prevent instruction injection. | LLM | SKILL.md:77 |
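The structured-persona mitigation for the first finding can be sketched as follows. This is a minimal illustration, not the skill's actual implementation: the field names in `ALLOWED_FIELDS` and the JSON storage format are assumptions; the real persona schema is defined by the skill itself.

```python
import json
import re

# Hypothetical field set -- the real persona schema is defined by the skill.
ALLOWED_FIELDS = {"name", "location", "target_role", "skills"}

def update_persona(path: str, field: str, value: str) -> None:
    """Persist persona data as structured JSON under a field allowlist.

    Free-form user text is stored only as a value, never as markup or
    headings, so a downstream LLM can render it as quoted data instead
    of splicing the raw file into its instruction text.
    """
    if field not in ALLOWED_FIELDS:
        raise ValueError(f"unknown persona field: {field!r}")
    # Collapse newlines/whitespace so injected "### New instructions"
    # blocks cannot masquerade as separate document sections.
    clean = re.sub(r"\s+", " ", value).strip()
    try:
        with open(path) as f:
            persona = json.load(f)
    except FileNotFoundError:
        persona = {}
    persona[field] = clean
    with open(path, "w") as f:
        json.dump(persona, f, indent=2)
```

The key design choice is the allowlist: arbitrary keys (and therefore arbitrary document structure) can never enter the persona file, and rejected fields fail loudly rather than silently persisting.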
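For the credential-storage finding, the file-permission hardening can be sketched in Python. The filename matches the one the skill declares; everything else (the JSON shape, the key name) is an assumption for illustration.

```python
import json
import os

def save_api_key(path: str, api_key: str) -> None:
    """Write credentials.local.json with owner-only permissions (0600).

    os.open applies the 0o600 mode at creation time, so the file is
    never world-readable, even briefly; a plain open()-then-chmod()
    sequence would leave a window where other users could read it.
    (The mode argument only takes effect when the file is created, so
    delete any pre-existing copy with looser permissions first.)
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        json.dump({"api_key": api_key}, f)
```

As the finding notes, an OS keychain or environment variable is still preferable to any plain-text file.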
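The command-injection remediation (argv lists, or `shlex.quote` when a shell string is unavoidable) can be sketched as below. The endpoint URL is a placeholder, not the skill's real API, and the functions are hypothetical helpers, not the skill's code.

```python
import shlex
import subprocess

# Placeholder endpoint -- not the skill's actual API.
SEARCH_URL = "https://api.example.com/search"

def search_jobs(keywords: str) -> str:
    """Invoke curl with an argv list: no shell is spawned, so shell
    metacharacters in `keywords` (;, |, $(...)) are passed as data."""
    result = subprocess.run(
        ["curl", "-sS", "--get", SEARCH_URL,
         "--data-urlencode", f"keywords={keywords}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def build_shell_command(keywords: str) -> str:
    """If a shell string is truly unavoidable, quote every
    interpolated value with shlex.quote before assembling it."""
    return (f"curl -sS --get {SEARCH_URL} "
            f"--data-urlencode {shlex.quote('keywords=' + keywords)}")
```

The argv-list form is the safer default; `shlex.quote` is the fallback the report itself suggests for contexts where a shell string cannot be avoided.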
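Finally, the JSON-escaping fix for the `keywords` parameter reduces to serializing with a real JSON encoder rather than string interpolation. A minimal sketch, with a hypothetical helper name:

```python
import json

def build_search_params(user_keywords: str) -> str:
    """Serialize via json.dumps so quotes and backslashes in user
    input are escaped: the input cannot terminate the string value
    or smuggle in sibling keys such as an "instructions" field."""
    return json.dumps({"keywords": user_keywords})
```

Contrast with `'{"keywords": "' + user_keywords + '"}'`, where a single `"` in the input breaks out of the string and lets the caller append arbitrary JSON.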