Trust Assessment
sutui-ai received a trust score of 95/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 1 finding: 0 critical, 0 high, 1 medium, and 0 low severity. The sole finding is a potential Server-Side Request Forgery (SSRF) via the `upload_image` tool.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | Potential Server-Side Request Forgery (SSRF) via the `upload_image` tool | LLM | SKILL.md:103 |

**Description:** The `upload_image` tool accepts an `image_url` parameter, which is then processed by the 'user-速推AI' backend service. If the backend does not properly validate or sanitize this URL, a malicious actor could supply an internal or sensitive URL (e.g., `file:///etc/passwd`, `http://localhost/admin`, or a cloud metadata endpoint such as `http://169.254.169.254/latest/meta-data/`) to probe the internal network or access sensitive resources on the 'user-速推AI' server. This skill exposes the interface through which the LLM can trigger such a request, creating a potential SSRF vector.

**Recommendation:** The backend service that fetches `image_url` must implement strict URL validation and sanitization. This includes whitelisting allowed schemes and domains, or ensuring the fetching mechanism is isolated and cannot reach internal resources or sensitive endpoints. Consider proxying requests through a service that prevents SSRF.
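The recommended mitigation can be sketched as a URL validator that the backend would run before fetching anything. This is a minimal illustration, not part of the skill itself; the `ALLOWED_SCHEMES` and `ALLOWED_HOSTS` values are hypothetical placeholders a deployment would replace with its own policy.

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Hypothetical policy: only plain HTTP(S) to an explicit host whitelist.
ALLOWED_SCHEMES = {"http", "https"}
ALLOWED_HOSTS = {"cdn.example.com"}  # placeholder domain

def is_safe_image_url(url: str) -> bool:
    """Return True only if the URL passes basic SSRF checks."""
    parsed = urlparse(url)
    # Reject non-HTTP schemes (blocks file://, gopher://, etc.).
    if parsed.scheme not in ALLOWED_SCHEMES:
        return False
    host = parsed.hostname
    # Reject missing hosts and anything outside the whitelist
    # (blocks localhost and raw metadata IPs like 169.254.169.254).
    if host is None or host not in ALLOWED_HOSTS:
        return False
    # Resolve the whitelisted host and reject private, loopback, or
    # link-local addresses, in case DNS points at an internal target.
    try:
        for info in socket.getaddrinfo(host, None):
            addr = ipaddress.ip_address(info[4][0])
            if addr.is_private or addr.is_loopback or addr.is_link_local:
                return False
    except socket.gaierror:
        return False
    return True
```

Note that resolving the host once and then fetching by name still leaves a DNS-rebinding window; a hardened deployment would connect to the resolved address directly or route fetches through an egress proxy, as the recommendation suggests.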
Powered by SkillShield