Security Audit
azure-ai-document-intelligence-dotnet
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
azure-ai-document-intelligence-dotnet received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 0 high, 1 medium, and 1 low severity. Key findings include "Unrestricted URI input for document analysis and model training" (medium) and "Skill requires sensitive credentials via environment variables" (low).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
MEDIUM · Unrestricted URI input for document analysis and model training (Layer: LLM, Location: SKILL.md:90)

The skill demonstrates using `AnalyzeDocumentAsync` and `BlobContentSource` with `Uri` inputs (e.g., `invoiceUri`, `fileUri`, `blobContainerUri`). If the AI agent allows untrusted user input to control these URIs directly, it could lead to several risks:

1. **Data Exfiltration**: The skill could be coerced into fetching sensitive documents from internal network resources or cloud storage (if the agent has network access) and then processing them, potentially exposing their content.
2. **Malicious Content Ingestion**: The skill could fetch and process malicious files from attacker-controlled URLs, potentially leading to further vulnerabilities if the processing library has flaws.
3. **SSRF (Server-Side Request Forgery)**: If the agent executes this code on a server, an attacker could use crafted URIs to scan internal networks or access metadata services.

The examples show public `https://example.com` URLs, but the underlying SDK calls accept any valid `Uri`. Implement strict validation and sanitization for all URI inputs: allowlist permitted domains and protocols, block access to internal networks, and handle any fetched content securely. For `blobContainerUri`, generate the SAS URL with only the necessary permissions and a short expiry, and do not accept arbitrary user-provided SAS URLs without validation.
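The allowlist recommendation above can be sketched as a small pre-check run before any URI reaches the SDK. The helper name `IsAllowedDocumentUri`, the example hosts, and the specific private ranges blocked are illustrative assumptions, not part of the skill itself:

```csharp
using System;
using System.Collections.Generic;
using System.Net;

// Reject any URI that is not absolute HTTPS, that targets a loopback,
// link-local, or private IPv4 address, or whose host is not on an
// explicit allowlist.
static bool IsAllowedDocumentUri(string candidate, ISet<string> allowedHosts)
{
    if (!Uri.TryCreate(candidate, UriKind.Absolute, out var uri)) return false;
    if (uri.Scheme != Uri.UriSchemeHttps) return false;

    // Block raw-IP targets in private/loopback ranges to reduce SSRF
    // exposure (e.g., the 169.254.169.254 cloud metadata service).
    if (IPAddress.TryParse(uri.Host, out var ip))
    {
        if (IPAddress.IsLoopback(ip)) return false;
        var b = ip.GetAddressBytes();
        if (b.Length == 4 && (b[0] == 10
            || (b[0] == 169 && b[1] == 254)
            || (b[0] == 172 && (b[1] & 0xF0) == 16)
            || (b[0] == 192 && b[1] == 168))) return false;
    }

    return allowedHosts.Contains(uri.Host);
}

// Illustrative allowlist; a real deployment would list its own hosts.
var allowed = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
{
    "example.com",
    "mystorageaccount.blob.core.windows.net",
};

Console.WriteLine(IsAllowedDocumentUri("https://example.com/invoice.pdf", allowed));  // True
Console.WriteLine(IsAllowedDocumentUri("http://example.com/invoice.pdf", allowed));   // False: not HTTPS
Console.WriteLine(IsAllowedDocumentUri("https://169.254.169.254/metadata", allowed)); // False: link-local
```

A check like this belongs immediately before the `AnalyzeDocumentAsync` or `BlobContentSource` call, so a rejected URI never produces any network request.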
LOW · Skill requires sensitive credentials via environment variables (Layer: LLM, Location: SKILL.md:40)

The skill's examples retrieve sensitive credentials, specifically `DOCUMENT_INTELLIGENCE_API_KEY` and `BLOB_CONTAINER_SAS_URL`, from environment variables. While this is a standard and often recommended practice for cloud SDKs, it means the skill's execution environment must be carefully secured to prevent these credentials from being exposed or misused. The `BLOB_CONTAINER_SAS_URL` is particularly sensitive: it grants direct, time-limited access to a storage container that may hold training data or other sensitive information. If the agent's execution environment is compromised, or if the skill were modified to exfiltrate these variables, the risk would be significant.

Ensure the AI agent's execution environment adheres to the principle of least privilege. Manage secret-bearing environment variables securely (e.g., with a secret management service). Generate SAS tokens with the minimum required permissions and the shortest practical expiry, and never expose them to untrusted users. Prefer `DefaultAzureCredential` (Managed Identity) over API keys where possible, as the skill itself recommends.
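The fail-fast and short-expiry advice above can be sketched as a startup check that reads the environment variables the skill expects and inspects the SAS token's `se` (signed expiry) parameter before use. The `SasExpiry` helper, the placeholder SAS URL, and the one-hour policy are illustrative assumptions, not part of the skill:

```csharp
using System;

// Extract the 'se' (signed expiry) parameter from a SAS URL's query
// string so the token's lifetime can be checked before it is used.
static DateTimeOffset? SasExpiry(Uri sasUrl)
{
    foreach (var pair in sasUrl.Query.TrimStart('?').Split('&'))
    {
        var kv = pair.Split('=', 2);
        if (kv.Length == 2 && kv[0] == "se"
            && DateTimeOffset.TryParse(Uri.UnescapeDataString(kv[1]), out var expiry))
            return expiry;
    }
    return null;
}

// Fail fast when a required secret is absent, and never log its value.
var apiKey = Environment.GetEnvironmentVariable("DOCUMENT_INTELLIGENCE_API_KEY");
if (string.IsNullOrWhiteSpace(apiKey))
    Console.Error.WriteLine("DOCUMENT_INTELLIGENCE_API_KEY is not set; abort before any SDK call.");

var sasUrl = Environment.GetEnvironmentVariable("BLOB_CONTAINER_SAS_URL")
    // Placeholder fallback so this sketch runs standalone; a real agent
    // should abort here instead of substituting a value.
    ?? "https://mystorageaccount.blob.core.windows.net/training?sp=r&se=2026-02-20T12%3A00%3A00Z&sig=placeholder";

var expiry = SasExpiry(new Uri(sasUrl));
if (expiry is null)
    Console.Error.WriteLine("SAS URL has no 'se' expiry; refusing to use it.");
else if (expiry - DateTimeOffset.UtcNow > TimeSpan.FromHours(1))
    Console.WriteLine("warning: SAS token is valid for more than 1 hour; prefer shorter expiries.");
```

Note that the expiry check is only a policy guard on the caller's side; the authoritative limit remains the permissions and expiry baked into the SAS signature when it is generated.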
[Full report](https://skillshield.io/report/9d06c84498b0b46d)
Powered by SkillShield