Trust Assessment
azure-ai-agents-py received a trust score of 70/100, placing it in the Caution category: the skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings (0 critical, 2 high, 1 medium, 0 low). The key findings are: an agent configured with CodeInterpreterTool, enabling arbitrary code execution; an agent configured with FunctionTool, enabling arbitrary function calls; and an agent configured with FileSearchTool, enabling access to uploaded documents.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest, at 63/100, indicating room for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Agent configured with CodeInterpreterTool enabling arbitrary code execution.** The skill demonstrates configuring an AI agent with `CodeInterpreterTool`, which lets the agent execute arbitrary Python code and generate files. If the agent processes untrusted user input, a malicious user could craft prompts to execute arbitrary commands on the host system, access sensitive files, or exfiltrate data through the code interpreter. The example explicitly sets `instructions="You can execute code and search files."` for the agent, indicating this capability is intended. *Mitigation:* implement strict input validation and sanitization for agent prompts; restrict the environment where the code interpreter runs (e.g., sandboxing, limited permissions); carefully review and limit the capabilities exposed to the interpreter; and consider whether `CodeInterpreterTool` is strictly necessary for the agent's intended function. | LLM | SKILL.md:82 |
| HIGH | **Agent configured with FunctionTool enabling arbitrary function calls.** The skill demonstrates configuring an AI agent with `FunctionTool`, which lets the agent call Python functions defined by the skill developer. While the example `get_weather` function is benign, if the agent processes untrusted user input, a malicious user could craft prompts to invoke sensitive or exploitable functions, leading to command injection or data exfiltration. The risk depends on the specific functions exposed. *Mitigation:* carefully review all functions exposed via `FunctionTool`; ensure they have minimal necessary permissions and do not expose sensitive system operations or data; implement strict input validation and sanitization for function arguments; and consider using a whitelist of allowed function calls or requiring explicit user confirmation for sensitive actions. | LLM | SKILL.md:94 |
| MEDIUM | **Agent configured with FileSearchTool enabling access to uploaded documents.** The skill demonstrates configuring an AI agent with `FileSearchTool`, which lets the agent perform RAG (Retrieval Augmented Generation) over uploaded documents. If the agent processes untrusted user input, a malicious user could craft prompts to retrieve sensitive information from these documents, potentially leading to data exfiltration. The example explicitly sets `instructions="You can execute code and search files."` for the agent. *Mitigation:* ensure that only non-sensitive or appropriately permissioned documents are uploaded for file search; implement strict access controls on the underlying vector store; and monitor agent interactions for unusual file access patterns. | LLM | SKILL.md:140 |
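The whitelist mitigation recommended for the FunctionTool finding can be sketched in plain Python, independent of the azure-ai-agents SDK. The `dispatch_tool_call` helper, the `ALLOWED_FUNCTIONS` registry, and the argument checks below are illustrative names and policies (not part of the SDK); the idea is that tool-call requests produced by the model pass through an explicit allowlist and argument validation before any function runs.

```python
import json

def get_weather(city: str) -> str:
    """Benign example function, mirroring the skill's get_weather sample."""
    return f"Sunny in {city}"

# Explicit allowlist: only functions registered here can ever be invoked
# on behalf of the agent, regardless of what the model asks for.
ALLOWED_FUNCTIONS = {"get_weather": get_weather}

MAX_ARG_LEN = 256  # reject oversized, possibly injected, string arguments

def dispatch_tool_call(name: str, args_json: str) -> str:
    """Validate and dispatch a model-requested tool call."""
    if name not in ALLOWED_FUNCTIONS:
        raise PermissionError(f"function {name!r} is not allowlisted")
    args = json.loads(args_json)
    for key, value in args.items():
        # Only accept short string arguments in this sketch.
        if not isinstance(value, str) or len(value) > MAX_ARG_LEN:
            raise ValueError(f"rejected argument {key!r}")
    return ALLOWED_FUNCTIONS[name](**args)
```

With this wrapper, `dispatch_tool_call("get_weather", '{"city": "Oslo"}')` succeeds, while a request for any unregistered name raises `PermissionError` instead of executing.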
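Similarly, the sandboxing advice for the CodeInterpreterTool finding can be illustrated with a minimal process-isolation sketch. This is not how the hosted Azure code interpreter works internally; it only shows the general pattern of running agent-generated code in a separate process with a timeout, a scratch working directory, and a stripped environment. A production sandbox would add filesystem, network, and resource isolation (containers, seccomp, rlimits).

```python
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Run a snippet of untrusted Python in a separate, constrained process.

    Illustrative sketch only: wall-clock timeout, throwaway working
    directory, empty environment, and Python's -I (isolated mode) flag,
    which ignores PYTHONPATH and the user site directory.
    """
    with tempfile.TemporaryDirectory() as workdir:
        proc = subprocess.run(
            [sys.executable, "-I", "-c", code],
            capture_output=True,
            text=True,
            timeout=timeout,  # raises TimeoutExpired on runaway code
            cwd=workdir,      # keep file writes inside a scratch dir
            env={},           # do not leak secrets via environment vars
        )
    return proc.stdout
```

For example, `run_untrusted("print(2 + 3)")` returns the child's stdout, while an infinite loop is killed by the timeout rather than blocking the host.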
[Full report](https://skillshield.io/report/b98e56f7e4b3aa40)