Security Audit
azure-ai-vision-imageanalysis-java
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
azure-ai-vision-imageanalysis-java received a trust score of 70/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 0 medium, and 1 low severity. Key findings include Potential for Local File Exfiltration via User-Controlled Paths, Server-Side Request Forgery (SSRF) and Data Exfiltration via User-Controlled URLs, and Use of Beta Dependency Version.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 68/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential for Local File Exfiltration via User-Controlled Paths.** The skill demonstrates reading local files with `BinaryData.fromFile(new File("image.jpg").toPath())`. If the file name or path is derived from untrusted user input, an attacker could specify arbitrary paths (e.g., `/etc/passwd`, `~/.aws/credentials`) to read sensitive local files, whose contents would then be sent to the Azure AI Vision service, effectively exfiltrating the data. Implement strict input validation and sanitization for any user-supplied file paths, restrict file access to a designated, isolated directory, and do not accept arbitrary paths from untrusted sources. | LLM | SKILL.md:56 |
| HIGH | **Server-Side Request Forgery (SSRF) and Data Exfiltration via User-Controlled URLs.** The skill demonstrates fetching images from a URL using `client.analyzeFromUrl("https://example.com/image.jpg", ...)`. If the `imageUrl` parameter is derived from untrusted user input, an attacker could supply internal network URLs (e.g., `http://localhost:8080/admin`, `http://169.254.169.254/latest/meta-data/`) to perform Server-Side Request Forgery (SSRF) attacks. This could allow access to internal resources, scanning of internal ports, or exfiltration of sensitive data from internal services to the Azure AI Vision service. Implement strict URL validation so that only allowed external domains are reachable: use an allowlist of permitted hosts, or apply robust URL parsing and sanitization that blocks internal IP addresses, loopback and link-local addresses, and non-HTTP/HTTPS schemes. | LLM | SKILL.md:68 |
| LOW | **Use of Beta Dependency Version.** The skill uses a beta version (`1.1.0-beta.1`) of the `azure-ai-vision-imageanalysis` library. Beta versions may contain unpatched vulnerabilities, be unstable, or introduce breaking changes, posing a risk to the stability and security of production applications. Use a stable, generally available (GA) version of the library in production; if a beta version is necessary, ensure thorough security review and monitoring. | LLM | SKILL.md:10 |
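The first HIGH finding can be mitigated with standard path-containment checks before any file is handed to `BinaryData.fromFile`. The following is a minimal sketch, not the skill's actual code; the base directory `/var/app/images` and the class and method names are illustrative assumptions.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class SafeImagePath {
    // Hypothetical allowed base directory; adjust per deployment.
    private static final Path ALLOWED_DIR =
            Paths.get("/var/app/images").toAbsolutePath().normalize();

    /**
     * Resolves a user-supplied file name against the allowed directory and
     * rejects any path that escapes it (e.g. "../../etc/passwd" or an
     * absolute path such as "/etc/passwd").
     */
    public static Path resolveOrThrow(String userSuppliedName) {
        Path candidate = ALLOWED_DIR.resolve(userSuppliedName).normalize();
        if (!candidate.startsWith(ALLOWED_DIR)) {
            throw new IllegalArgumentException(
                    "Path escapes allowed directory: " + userSuppliedName);
        }
        return candidate;
    }

    public static void main(String[] args) {
        System.out.println(resolveOrThrow("cat.jpg")); // resolves inside ALLOWED_DIR
        try {
            resolveOrThrow("../../etc/passwd");        // rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The key detail is calling `normalize()` *after* `resolve()`, so `..` segments are collapsed before the `startsWith` containment check.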
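For the SSRF finding, a URL gatekeeper in front of `client.analyzeFromUrl` can enforce the recommended allowlist-plus-address checks. This is a sketch under assumptions: the allowlisted hosts are placeholders, and a production validator would also need to handle DNS rebinding and redirects.

```java
import java.net.InetAddress;
import java.net.URI;
import java.net.UnknownHostException;
import java.util.Set;

public class SafeImageUrl {
    // Hypothetical allowlist of permitted image hosts; adjust per deployment.
    private static final Set<String> ALLOWED_HOSTS =
            Set.of("images.example.com", "cdn.example.com");

    /**
     * Accepts only https URLs whose host is on the allowlist and whose
     * resolved address is not loopback, link-local, wildcard, or private.
     */
    public static boolean isAllowed(String rawUrl) {
        try {
            URI uri = URI.create(rawUrl);
            if (!"https".equalsIgnoreCase(uri.getScheme())) return false;
            String host = uri.getHost();
            if (host == null || !ALLOWED_HOSTS.contains(host.toLowerCase())) return false;
            InetAddress addr = InetAddress.getByName(host);
            return !(addr.isLoopbackAddress() || addr.isLinkLocalAddress()
                     || addr.isAnyLocalAddress() || addr.isSiteLocalAddress());
        } catch (IllegalArgumentException | UnknownHostException e) {
            return false; // malformed URL or unresolvable host: reject
        }
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("http://localhost:8080/admin"));              // false: not https
        System.out.println(isAllowed("https://169.254.169.254/latest/meta-data/")); // false: not allowlisted
        System.out.println(isAllowed("file:///etc/passwd"));                        // false: wrong scheme
    }
}
```

Rejecting unknown schemes and unlisted hosts *before* any network lookup keeps the validator fail-closed: a malformed or unresolvable URL is treated as hostile rather than retried.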
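Addressing the LOW finding is a one-line dependency change in the project's `pom.xml`. The version below is a deliberate placeholder, not a verified release number; check Maven Central for the latest GA release of the artifact.

```xml
<dependency>
  <groupId>com.azure</groupId>
  <artifactId>azure-ai-vision-imageanalysis</artifactId>
  <!-- Placeholder: replace with the latest GA (non-beta) version
       listed on Maven Central instead of 1.1.0-beta.1. -->
  <version>REPLACE_WITH_LATEST_GA_VERSION</version>
</dependency>
```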
[View the full report](https://skillshield.io/report/b6eb43f59496e7cd)
Powered by SkillShield