Trust Assessment
azure-ai-projects-py received a trust score of 83/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include Potential Command Injection via Code Interpreter Tool and Excessive Permissions via File Search Tool.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via Code Interpreter Tool.** The skill demonstrates the use of `CodeInterpreterTool`, which allows an AI agent to "Execute Python, generate files". If an agent configured with this tool processes untrusted user input, a malicious user could inject arbitrary Python code, leading to command injection, system compromise, or data manipulation within the agent's execution environment. The risk is significant if the agent's sandbox is not robust or if it has elevated privileges. *Mitigation:* run the agent in a strictly sandboxed environment with minimal permissions, validate and sanitize any user-provided prompts that could reach the code interpreter, and consider whether the agent truly requires arbitrary code execution for its intended function. | LLM | SKILL.md:80 |
| MEDIUM | **Excessive Permissions via File Search Tool.** The skill demonstrates the use of `FileSearchTool`, described as enabling "RAG over uploaded documents". While useful for retrieval-augmented generation, if this tool is not properly scoped, it could allow an AI agent to access, read, or exfiltrate sensitive files from the agent's environment or connected storage; a malicious prompt could manipulate the agent into searching for or retrieving unauthorized content. *Mitigation:* restrict the tool's access to only the necessary directories or document stores, enforce access controls that keep the agent out of sensitive paths, and regularly audit the files and data accessible to agents configured with this tool. | LLM | SKILL.md:80 |
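The HIGH finding's recommendation to validate user prompts before they reach the code interpreter can be sketched as a pre-flight screen. This is a minimal illustrative example, not part of azure-ai-projects itself: the blocked patterns, length limit, and `screen_prompt` function are all assumptions, and a real deployment would combine such screening with sandboxing rather than rely on pattern matching alone.

```python
import re

# Illustrative deny-list of patterns that suggest an attempt to escape the
# interpreter's intended use. A real policy would be broader and would be
# defense-in-depth on top of a hardened sandbox, not a substitute for one.
BLOCKED_PATTERNS = [
    re.compile(r"\bos\.system\b"),
    re.compile(r"\bsubprocess\b"),
    re.compile(r"\b__import__\b"),
    re.compile(r"\bopen\s*\(\s*['\"]/etc/"),
]

MAX_PROMPT_LENGTH = 4000  # assumed cap; tune for your workload

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a user prompt destined for an agent
    with CodeInterpreterTool enabled."""
    if len(prompt) > MAX_PROMPT_LENGTH:
        return False, "prompt exceeds maximum length"
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return False, f"blocked pattern: {pattern.pattern}"
    return True, "ok"
```

Prompts that pass the screen are forwarded to the agent as usual; rejected prompts are logged and refused before any interpreter run is created.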
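The MEDIUM finding's advice to limit `FileSearchTool` access to necessary directories can be enforced with a path-scoping check of the kind sketched below. The root directory and `is_within_scope` helper are hypothetical names for illustration; the point is that every requested path is resolved and verified to stay inside the allowed document store, defeating traversal attempts.

```python
from pathlib import Path

# Hypothetical root of the document store the agent is allowed to search.
ALLOWED_ROOT = Path("/srv/agent-documents")

def is_within_scope(requested: str, root: Path = ALLOWED_ROOT) -> bool:
    """Resolve `requested` relative to `root` and confirm the result is
    still inside `root`, rejecting traversal like '../../etc/passwd'."""
    resolved = (root / requested).resolve()
    try:
        resolved.relative_to(root.resolve())
        return True
    except ValueError:
        return False
```

Applied as a gate in front of the document store, this keeps a manipulated agent from retrieving files outside its intended scope; pairing it with a periodic audit of what the store actually contains covers the finding's remaining recommendation.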
[View the full SkillShield report](https://skillshield.io/report/94c3d93b81c11838)
Powered by SkillShield