Trust Assessment
2captcha received a trust score of 81/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include "Unsafe direct script download and execution during installation" and "Potential command injection via unsanitized arguments to external tool".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unsafe direct script download and execution during installation.** The skill instructs the user to download and execute a script directly from a remote GitHub URL (`raw.githubusercontent.com`) using `curl` and `chmod +x`. This pattern is highly susceptible to supply chain attacks: if the remote script is compromised, the user's system could be compromised, since no integrity check (e.g., checksum verification) is performed before execution. An AI agent following these installation instructions would be vulnerable. Avoid direct execution of remote scripts. Prefer a package manager, verify script integrity with checksums, or review the script manually before execution. For automated agents, consider sandboxed environments or pre-approved binaries. | LLM | SKILL.md:10 |
| MEDIUM | **Potential command injection via unsanitized arguments to external tool.** The skill demonstrates calling an external CLI tool (`./solve-captcha`) with various arguments (e.g., file paths, URLs, site keys, text). If an AI agent dynamically constructs these arguments from untrusted user input or external data without proper sanitization, an attacker could inject malicious shell commands; for example, a crafted file path or URL containing shell metacharacters could lead to arbitrary command execution. Implement robust input validation and sanitization for all arguments passed to external command-line tools. Avoid concatenating untrusted input directly into shell commands. Use safe execution methods (e.g., `subprocess.run` with `shell=False` in Python) that prevent shell interpretation of arguments. | LLM | SKILL.md:38 |
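The checksum verification recommended for the HIGH finding can be sketched as a small Python gate that refuses to run a downloaded installer unless its digest matches a published value. This is a minimal illustration, not part of the skill itself; the function name and the idea of a published `expected_sha256` are assumptions for the example.

```python
import hashlib


def sha256_matches(path: str, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 digest equals the published
    checksum. Callers should refuse to chmod +x or execute the script
    when this returns False."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large downloads don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

In an installation flow, the `curl` download would be followed by this check, and `chmod +x` / execution would happen only on a match; a mismatch should abort the install and alert the user.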
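For the MEDIUM finding, the safe pattern the report recommends can be sketched as building an argv list and invoking the tool without a shell. The `--image` flag here is hypothetical (the skill's actual CLI flags come from SKILL.md, which is not reproduced in this report); the point is that with `shell=False`, metacharacters in untrusted input are passed as literal data.

```python
import subprocess


def build_solve_argv(image_path: str) -> list:
    """Build an argv list for a hypothetical solve-captcha invocation.
    Passing a list to subprocess.run with shell=False (the default)
    means shell metacharacters in image_path are never interpreted."""
    if image_path.startswith("-"):
        # Prevent option injection: a crafted name like "--config=evil"
        # would otherwise be parsed as a flag by the tool.
        raise ValueError("path must not begin with '-'")
    return ["./solve-captcha", "--image", image_path]


# Usage (not run here): the list form avoids any shell entirely.
# subprocess.run(build_solve_argv(user_path), check=True, capture_output=True)
```

Even a path like `cap; rm -rf ~.png` is delivered to the tool as one literal argument, so the injection vector described in the finding is closed at the invocation boundary.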
Embed Code
[SkillShield Report](https://skillshield.io/report/f36cfebfea1c50fb)