Trust Assessment
agent-framework-azure-ai-py received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. Key findings: arbitrary code execution via `HostedCodeInterpreterTool`, excessive permissions through `HostedFileSearchTool`, and unpinned pre-release dependencies that introduce supply chain risk.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, making it the primary area for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Agent skill enables arbitrary code execution via `HostedCodeInterpreterTool`.** The skill demonstrates how to integrate `HostedCodeInterpreterTool` into an agent (e.g., in the "Agent with Hosted Tools" and "Complete Example" sections). This tool allows the agent to execute arbitrary Python code. If the agent's input is derived from untrusted user input, an attacker could inject malicious code, leading to command injection and potential compromise of the host system where the interpreter runs. Carefully evaluate the necessity of enabling `HostedCodeInterpreterTool` for agents exposed to untrusted input. Implement strict input validation and sanitization for any prompts that could trigger code execution. Consider sandboxing the code interpreter environment to limit its access to system resources and prevent unauthorized operations. | LLM | SKILL.md:100 |
| HIGH | **Agent skill grants excessive permissions through `HostedFileSearchTool`.** The skill's "Hosted Tools Quick Reference" section describes `HostedFileSearchTool` as capable of "Search vector stores". While not explicitly instantiated in the provided code examples, the skill instructs on its availability and purpose. Granting an agent the ability to search arbitrary vector stores, especially if they contain sensitive data and the agent is exposed to untrusted input, constitutes excessive permissions. This could be exploited for data exfiltration or unauthorized information retrieval. Apply the principle of least privilege to agent tool access: enable only the tools strictly necessary for the agent's function. For `HostedFileSearchTool`, ensure that the vector stores are properly secured, contain only non-sensitive information, or implement strict access controls and input validation to prevent unauthorized access or broad searches. | LLM | SKILL.md:156 |
| HIGH | **Unpinned pre-release dependencies introduce supply chain risk.** The installation instructions `pip install agent-framework --pre` and `pip install agent-framework-azure-ai --pre` recommend installing pre-release versions without pinning specific versions. This practice introduces significant supply chain risks: pre-release packages may be less stable or secure, and without version pinning, future installations could inadvertently pull in different, potentially compromised, versions of the dependencies. Always pin dependencies to specific, known-good versions (e.g., `package==1.2.3`). Avoid using `--pre` in production environments unless absolutely necessary and after thorough security review. Use a `requirements.txt` or `pyproject.toml` with locked dependencies to ensure deterministic builds and mitigate risks from upstream changes. | LLM | SKILL.md:40 |
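The dependency-pinning remediation amounts to replacing the `--pre` installs with exact, vetted versions in a lock file. The version numbers below are placeholders, not known-good releases of these packages; substitute the versions you have actually reviewed.

```text
# requirements.txt -- pin exact, vetted versions instead of
# `pip install agent-framework --pre` (versions shown are placeholders)
agent-framework==1.0.0b1
agent-framework-azure-ai==1.0.0b1
```

For stronger supply-chain guarantees, pip's hash-checking mode (`pip install --require-hashes -r requirements.txt`) rejects any artifact whose hash differs from the one recorded at review time.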
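The input-validation mitigation for the critical finding can be sketched as a policy gate placed between untrusted user input and any agent that carries `HostedCodeInterpreterTool`. This is a minimal illustration, not part of agent-framework: the `BLOCKED_PATTERNS` list, `is_prompt_allowed`, and `run_with_interpreter` are all hypothetical names, and a real deployment would pair such a gate with sandboxing rather than rely on pattern matching alone.

```python
import re

# Hypothetical deny-list of patterns that suggest an attempt to reach the
# host system through the code interpreter. Illustrative only -- a real
# policy would be broader and combined with interpreter sandboxing.
BLOCKED_PATTERNS = [
    r"\bsubprocess\b",
    r"\bos\.system\b",
    r"\bopen\s*\(\s*['\"]/etc/",
    r"\brm\s+-rf\b",
]


def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt matches any blocked pattern."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS)


def run_with_interpreter(prompt: str) -> str:
    """Gate untrusted input before it reaches a code-executing agent."""
    if not is_prompt_allowed(prompt):
        raise PermissionError("prompt rejected by code-execution policy")
    # Here the real skill would invoke the agent configured with
    # HostedCodeInterpreterTool; omitted in this sketch.
    return "forwarded to agent"
```

A deny-list like this is a defense-in-depth layer, not a substitute for running the interpreter in an isolated environment with minimal filesystem and network access.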
[View the full report on SkillShield](https://skillshield.io/report/f02c39a42ce6e996)