Trust Assessment
ad-ready received a trust score of 73/100, placing it in the Caution category. Users should review this skill's security findings before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings: Server-Side Request Forgery (SSRF) via the product URL, and Arbitrary File Read leading to data exfiltration.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Server-Side Request Forgery (SSRF) via product URL.** The skill accepts a `--product-url` argument which is used to fetch content via `httpx.get()`. If a malicious or internal URL (e.g., `http://localhost`, `http://169.254.169.254/latest/meta-data/`) is provided, the skill will attempt to fetch content from that URL. This content, including potentially sensitive internal network data or cloud metadata, could then be processed and uploaded to the external ComfyDeploy API, leading to data exfiltration. **Remediation:** Implement a URL validation mechanism to restrict `product_url` and image URLs to known safe external domains. Block access to private IP ranges (e.g., 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 127.0.0.0/8, link-local addresses, and cloud metadata IPs such as 169.254.169.254). Consider using a domain allowlist if possible. | LLM | scripts/generate.py:100 |
| HIGH | **Arbitrary File Read leading to Data Exfiltration.** The skill allows users to specify local file paths for the `--product-image`, `--logo`, and `--reference` arguments. The `upload_file` function then reads the content of these user-provided paths and uploads them to the external ComfyDeploy API. This enables a malicious actor to read and exfiltrate arbitrary files from the system where the skill is executed (e.g., `/etc/passwd`, `~/.ssh/id_rsa`). **Remediation:** Restrict file paths to a designated, sandboxed directory (e.g., a temporary upload directory). Validate that the provided `file_path` does not contain directory traversal sequences (e.g., `../`) and resolves within an allowed base directory. Alternatively, implement a file picker UI that prevents arbitrary path input. | LLM | scripts/generate.py:69 |
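The remediations above can be sketched as two small validators. This is a minimal illustration, not the skill's actual code: `is_safe_url` rejects non-HTTP schemes and any host that resolves to a private, loopback, link-local, or reserved address (covering the cloud metadata IP), and `resolve_safe_path` confines user-supplied paths to a base upload directory. Function names and the choice of a base directory are assumptions for the example.

```python
import ipaddress
import socket
from pathlib import Path
from urllib.parse import urlparse


def is_safe_url(url: str) -> bool:
    """Return True only for http(s) URLs whose host resolves to a public address."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve the hostname; a numeric IP passes through unchanged.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # Blocks 10/8, 172.16/12, 192.168/16, 127/8, 169.254/16 (metadata IP), etc.
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True


def resolve_safe_path(file_path: str, base_dir: str) -> Path:
    """Resolve file_path and ensure it stays inside base_dir, else raise ValueError."""
    base = Path(base_dir).resolve()
    candidate = (base / file_path).resolve()
    if not candidate.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"path escapes upload directory: {file_path}")
    return candidate
```

Note that resolving the hostname once for validation and again inside the HTTP client leaves a DNS-rebinding window; a stricter fix is to connect to the validated IP directly, or validate at the connection layer.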