Security Audit
agent-framework-azure-ai-py
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
agent-framework-azure-ai-py received a trust score of 41/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 2 medium, and 0 low severity. Key findings include Unstable/Pre-release Dependencies Used, Agent Granted Arbitrary Code Execution Capability, Potential Data Exfiltration via File Search Tool.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Agent Granted Arbitrary Code Execution Capability.** The skill uses `HostedCodeInterpreterTool`, which explicitly grants the agent the ability to execute Python code. If the agent's prompts are not carefully controlled and sandboxed, a malicious user could exploit this capability to run arbitrary code on the host system, leading to command injection, data exfiltration, or system compromise. While this is a documented feature of the tool, shipping it in a skill package without explicit warnings about sandboxing or input validation constitutes a high-risk permission. *Remediation:* strictly validate and sanitize any user input that can influence the code executed by `HostedCodeInterpreterTool`; run the code interpreter in a properly isolated sandbox with minimal permissions; and add explicit warnings to the skill documentation about these risks and the need for robust sandboxing. | Static | SKILL.md:100 |
| MEDIUM | **Unstable/Pre-release Dependencies Used.** The skill's `pip install` commands pull pre-release versions of `agent-framework` and `agent-framework-azure-ai` via the `--pre` flag. Pre-release packages are inherently less stable than released versions and may contain undiscovered or unpatched bugs and vulnerabilities, increasing the skill's supply-chain risk. *Remediation:* use stable, released versions of the `agent-framework` packages where available; if pre-releases are necessary, pin exact versions (e.g., `agent-framework==0.1.0b1`) to ensure determinism, and monitor security advisories for those specific versions. | Static | SKILL.md:22 |
| MEDIUM | **Potential Data Exfiltration via File Search Tool.** The skill includes `HostedFileSearchTool`, which lets the agent search vector stores. If those stores contain sensitive data (e.g., PII or confidential business information), a prompted agent could retrieve and disclose it, resulting in data exfiltration. The actual risk depends on the stores' contents and how easily the agent can be steered into retrieving and revealing specific records. *Remediation:* keep highly sensitive or PII data out of agent-accessible vector stores unless strictly necessary and protected by appropriate access controls; apply masking or redaction to sensitive fields that must be stored; and monitor agent outputs, with guardrails to prevent disclosure of sensitive information. | Static | SKILL.md:99 |
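The input-validation remediation for the code-execution finding can be sketched as a deny-by-default screen applied before any user text reaches an agent that holds `HostedCodeInterpreterTool`. This is an illustrative guard, not part of the `agent-framework` API; the function name, pattern list, and length limit are all assumptions a real deployment would tune.

```python
import re

# Hypothetical guard: screen user input before it reaches an agent that can
# execute Python via a code-interpreter tool. The deny list covers a few
# common command-injection and exfiltration primitives; it is illustrative,
# not exhaustive.
BLOCKED_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"\bos\.system\b",
        r"\bsubprocess\b",
        r"\b__import__\b",
        r"\beval\s*\(",
        r"\bexec\s*\(",
        r"\bsocket\b",
    )
]

def screen_user_input(text: str, max_len: int = 4_000) -> str:
    """Reject oversized or suspicious input before the agent sees it."""
    if len(text) > max_len:
        raise ValueError("input too long")
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            raise ValueError(f"blocked pattern: {pattern.pattern}")
    return text

# Usage sketch: wrap every call into the agent.
# safe_prompt = screen_user_input(raw_user_prompt)
# reply = agent.run(safe_prompt)  # agent holds HostedCodeInterpreterTool
```

A deny list like this is only one layer; the finding's core recommendation, running the interpreter in an isolated sandbox with minimal permissions, still applies even when input screening is in place.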
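The pinning remediation for the dependency finding amounts to dropping the `--pre` flag in favor of exact version specifiers. The `0.1.0b1` version below is the illustrative example from the finding itself, not a recommendation of a specific release.

```shell
# Instead of:  pip install --pre agent-framework agent-framework-azure-ai
# Pin exact versions so installs are deterministic and advisories can be
# tracked against a known release (versions shown are placeholders):
pip install "agent-framework==0.1.0b1" "agent-framework-azure-ai==0.1.0b1"
```

Recording the same pins in a `requirements.txt` committed alongside the skill gives reviewers a single place to audit the dependency set.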
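The redaction guardrail suggested for the file-search finding can be sketched as a post-processing filter on agent output. The two regexes below (email addresses and US-style SSNs) are illustrative examples only; they are not an adequate PII detector for production use.

```python
import re

# Illustrative output guardrail: mask common PII patterns in agent replies
# before they are displayed or logged. Real deployments should use a
# dedicated PII-detection service; these two patterns are only examples.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII spans with a [REDACTED:<kind>] placeholder."""
    for kind, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{kind}]", text)
    return text
```

Filtering outputs complements, but does not replace, the finding's primary advice: keep sensitive data out of agent-accessible vector stores in the first place.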
[Full report](https://skillshield.io/report/709b4c5811cf832a)
Powered by SkillShield