Trust Assessment
search-reddit received a trust score of 48/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 2 critical, 1 high, 2 medium, and 1 low severity. Key findings include Unsafe deserialization / dynamic eval, Node lockfile missing, User input directly injected into LLM prompt.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 25/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings6
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | User input directly injected into LLM prompt The user-provided search query (`args.query`) is directly interpolated into the `REDDIT_SEARCH_PROMPT` string using simple string replacement. This allows a malicious user to craft a query that includes instructions to the LLM, potentially overriding its intended behavior, extracting sensitive information, or manipulating its output format. Implement robust input sanitization or use a templating engine that strictly separates user input from prompt instructions. Consider using a structured prompt format or a dedicated prompt injection defense library to prevent user input from being interpreted as instructions by the LLM. | LLM | scripts/search.js:300 | |
| CRITICAL | LLM-generated tool arguments injected into nested prompt The LLM's `toolCall.function.arguments` are parsed as JSON, and the `toolArgs.query` extracted from this is then directly injected into a *new* prompt for a nested `callOpenAI` invocation: ``Perform a web search for: ${toolArgs.query}``. If the initial prompt injection allows the LLM to craft a malicious `toolArgs.query`, this malicious query will then be injected into the nested prompt, leading to a second-order prompt injection. This could further manipulate the LLM's behavior or facilitate data exfiltration. Sanitize or strictly validate `toolArgs.query` before injecting it into any further prompts. Ensure that all LLM-generated content used in subsequent LLM calls is treated as untrusted data and is not directly concatenated into new prompts without proper validation. | LLM | scripts/search.js:340 | |
| HIGH | Nested web_search tool call lacks domain restriction When the LLM decides to use the `web_search` tool, the skill makes a *nested* call to `callOpenAI` with a new prompt. The `tools` argument for this nested call includes the `web_search` tool definition. However, this tool definition does not explicitly set `allowed_domains: ['reddit.com']` as a fixed parameter for the tool call. This means a malicious LLM (potentially manipulated by prompt injection) could specify `allowed_domains: ["malicious.com"]` in its tool call arguments, allowing the `web_search` tool to search arbitrary domains and potentially exfiltrate data or access unauthorized resources, despite the `SKILL.md` stating 'Allowed domain: `reddit.com`'. Ensure that all invocations of the `web_search` tool, including nested calls, explicitly restrict `allowed_domains` to `['reddit.com']` or other safe domains. This can be achieved by passing the `allowed_domains` parameter with the desired value directly in the `tool_choice` or `tools` argument of the `callOpenAI` function, overriding any LLM-provided values for this parameter. | LLM | scripts/search.js:340 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/arkaydeus/search-reddit/scripts/search.js:186 | |
| MEDIUM | Unsafe deserialization / dynamic eval Decryption followed by code execution Remove obfuscated code execution patterns. Legitimate code does not need base64-encoded payloads executed via eval, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/arkaydeus/search-reddit/scripts/search.js:227 | |
| LOW | Node lockfile missing package.json is present but no lockfile was found (package-lock.json, pnpm-lock.yaml, or yarn.lock). Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/arkaydeus/search-reddit/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/b03206a24712d88a)
Powered by SkillShield