Trust Assessment
liblib-ai-gen received a trust score of 91/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 2 findings: 0 critical, 0 high, 1 medium, and 1 low severity. Key findings: a suspicious import of `requests`, and user-provided URLs forwarded to an external API, enabling potential SSRF.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | Suspicious import: requests. Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary: network and system modules in skill code may indicate data exfiltration. | Static | skills/xtaq/liblib-ai-gen/scripts/liblib_client.py:13 |
| LOW | User-provided URLs sent to external API, enabling potential SSRF. The skill accepts user-supplied URLs for reference images (`--ref-images` in image generation) and for start/end frames (`--start-frame`, `--end-frame` in image-to-video generation), and passes them directly to the LiblibAI API, which then fetches content from them. A malicious user could supply a URL pointing at an internal resource (e.g. `http://localhost/admin`, or `file:///etc/passwd` if the service supports that scheme), leading to Server-Side Request Forgery (SSRF) against the LiblibAI service's environment, or exposing internal resources if that service can reach the skill's local network. The skill does not exfiltrate data itself, but it acts as a conduit for the external API to fetch arbitrary user-provided URLs. Mitigation: validate URLs before submission (e.g. whitelist allowed domains, reject private IP ranges, or proxy and validate content before sending it on), and confirm the external API has its own SSRF protections. | LLM | scripts/liblib_client.py:130 |
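The URL-validation mitigation suggested in the low-severity finding can be sketched as a small pre-flight check. The function below is a hypothetical helper (not part of `liblib_client.py`): it restricts URLs to `http`/`https`, resolves the hostname, and rejects any address in a private, loopback, link-local, or reserved range before the URL would be forwarded to the external API.

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Only plain web schemes; blocks file://, gopher://, etc.
ALLOWED_SCHEMES = {"http", "https"}

def is_safe_url(url: str) -> bool:
    """Return True only if the URL looks safe to hand to an external fetcher.

    This is a basic SSRF guard: it rejects non-HTTP schemes and any URL
    whose host resolves to a private, loopback, link-local, or reserved
    address (e.g. localhost, 10.0.0.0/8, 169.254.169.254).
    """
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        return False
    host = parsed.hostname
    if not host:
        return False
    try:
        # Resolve the hostname; a name may map to several addresses,
        # so every one of them must be acceptable.
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False
    return True
```

Note that resolve-then-check is only a first line of defense: a hostile DNS server can pass this check and re-resolve to an internal address when the external service fetches the URL (DNS rebinding), so a domain whitelist or a validating proxy remains the stronger option mentioned in the finding.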