Trust Assessment
anydocs received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 8 findings: 2 critical, 2 high, 4 medium, and 0 low severity. Key findings include "Network egress to untrusted endpoints", "Missing required field: name", and "Suspicious import: requests".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (8)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. *Remediation:* Review all outbound network calls and remove connections to webhook collectors, paste sites, and raw IP addresses; legitimate API calls should use well-known service domains. | Manifest | skills/pektech/anydocs/cli.py:142 |
| CRITICAL | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. *Remediation:* Review all outbound network calls and remove connections to webhook collectors, paste sites, and raw IP addresses; legitimate API calls should use well-known service domains. | Manifest | skills/pektech/anydocs/lib/scraper.py:50 |
| HIGH | **Server-Side Request Forgery (SSRF) via user-controlled URLs.** The skill lets users set the `base_url` and `sitemap_url` parameters (via the `anydocs_config` and `anydocs_index` tools), which are then used to make HTTP/HTTPS requests in `lib/scraper.py` without sufficient validation. An attacker could supply internal addresses (e.g., `http://127.0.0.1`, `http://localhost`), allowing the skill to reach internal services, scan internal networks, or access sensitive data not intended for public exposure. The HTTPS validation applies only when `use_browser` and `gateway_token` are both present, leaving plain HTTP requests unvalidated. *Remediation:* Implement strict URL validation: whitelist allowed schemes (e.g., only `https`), block private IP ranges, and consider whitelisting specific domains or disallowing redirects to untrusted locations. | LLM | lib/scraper.py:60 |
| HIGH | **Credential exfiltration risk via malicious gateway URL.** When `use_browser` is enabled and a `gateway_token` is provided (via CLI argument or the `OPENCLAW_GATEWAY_TOKEN` environment variable), the skill makes API calls to a user-controlled `gateway_url`. If an attacker can manipulate the `gateway_url` parameter, the sensitive `gateway_token` is sent in the `Authorization` header to an attacker-controlled server. *Remediation:* Restrict `gateway_url` to trusted local addresses (e.g., `127.0.0.1`, `localhost`) or a predefined whitelist of known, secure gateway URLs; do not accept arbitrary user input for this sensitive configuration. | LLM | lib/scraper.py:109 |
| MEDIUM | **Missing required field: `name`.** The `name` field is required for claude_code skills but is missing from the frontmatter. *Remediation:* Add a `name` field to the SKILL.md frontmatter. | Static | skills/pektech/anydocs/SKILL.md:1 |
| MEDIUM | **Suspicious import: `requests`.** The `requests` module provides network access; network and system modules in skill code can indicate data exfiltration. *Remediation:* Verify this import is necessary. | Static | skills/pektech/anydocs/lib/scraper.py:3 |
| MEDIUM | **Regular expression denial of service (ReDoS).** The `_regex_search` function in `lib/indexer.py` directly compiles and runs a user-provided pattern (the `query` parameter when `regex=True`). A malicious or pathologically complex pattern can consume excessive CPU, denying service to the skill's process. *Remediation:* Enforce a timeout on regex compilation and matching, or use a library with ReDoS protection; consider restricting the complexity of user-provided patterns. | LLM | lib/indexer.py:70 |
| MEDIUM | **Excessive permissions / browser sandbox risk with untrusted URLs.** The skill's `DiscoveryEngine` can use Playwright to render JavaScript-heavy pages when `use_browser` is enabled; Playwright downloads and runs a full Chromium browser. Pointing that browser at a user-controlled (potentially malicious) `base_url` or `sitemap_url` adds a significant attack surface: while the skill itself does not inject arbitrary JavaScript, a hostile page could attempt to exploit browser vulnerabilities, exhaust system resources, or mount other client-side attacks. Safety here depends heavily on the Playwright sandbox and the execution environment. *Remediation:* Run Playwright in a properly sandboxed, isolated environment, apply resource limits to browser processes, and clearly warn users about the risks of enabling browser rendering for untrusted or unverified URLs. | LLM | lib/scraper.py:159 |
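The SSRF remediation above can be sketched as a stdlib-only URL guard. This is a minimal illustration, not the skill's actual code: `validate_fetch_url` and the `ALLOWED_SCHEMES` policy are hypothetical names, and a production check would also need to handle redirects and DNS rebinding.

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"https"}  # hypothetical policy: HTTPS only


def validate_fetch_url(url: str) -> str:
    """Reject URLs that could reach internal services (basic SSRF guard)."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"scheme {parsed.scheme!r} not allowed")
    host = parsed.hostname
    if host is None:
        raise ValueError("URL has no host")
    # Resolve the host and reject private, loopback, link-local,
    # and reserved addresses before any request is made.
    for info in socket.getaddrinfo(host, None):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            raise ValueError(f"{host} resolves to disallowed address {addr}")
    return url
```

A caller would run every user-supplied `base_url`/`sitemap_url` through this guard before fetching; note that checking at request time (and again after any redirect) matters, since a hostname can be re-pointed between validation and use.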
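The gateway-token finding suggests an allowlist before any authenticated call. A minimal sketch, assuming a local-only policy; `check_gateway_url` and `TRUSTED_GATEWAY_HOSTS` are hypothetical names, not part of the skill:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: only send the bearer token to a local gateway.
TRUSTED_GATEWAY_HOSTS = {"127.0.0.1", "localhost"}


def check_gateway_url(gateway_url: str) -> str:
    """Refuse to attach the gateway_token to an untrusted host."""
    host = urlparse(gateway_url).hostname
    if host not in TRUSTED_GATEWAY_HOSTS:
        raise ValueError(f"refusing to send gateway_token to untrusted host {host!r}")
    return gateway_url
```

The check would run once when the configuration is loaded and again immediately before each request, so a later mutation of `gateway_url` cannot bypass it.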
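One way to implement the ReDoS timeout the report recommends is to run the user's pattern in a child process that can be killed when it exceeds a time budget (Python's stdlib `re` matching cannot be interrupted by signals mid-match). This is a hypothetical helper, not the skill's `_regex_search`, and the `fork` start method assumed here is Unix-only:

```python
import multiprocessing
import re


def _search_worker(pattern, text, out):
    # Runs in a child process so a runaway match can be killed.
    try:
        out.put(re.search(pattern, text) is not None)
    except re.error:
        out.put(False)  # treat an invalid pattern as no match


def safe_regex_search(pattern: str, text: str, timeout: float = 1.0) -> bool:
    """Run a user-supplied regex under a hard wall-clock budget."""
    ctx = multiprocessing.get_context("fork")  # assumption: Unix host
    out = ctx.Queue()
    proc = ctx.Process(target=_search_worker, args=(pattern, text, out))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.kill()  # catastrophic backtracking exceeded the budget
        proc.join()
        raise TimeoutError(f"regex {pattern!r} exceeded {timeout}s budget")
    return out.get()
```

Process spawn adds per-query overhead, so an alternative design is an RE2-style engine (linear-time matching, no backtracking) when user patterns don't need backreferences.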