Security Audit
azure-ai-agents-persistent-dotnet
github.com/sickn33/antigravity-awesome-skills
Trust Assessment
azure-ai-agents-persistent-dotnet received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is that the skill exposes powerful agent creation tools with excessive permissions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Skill exposes powerful agent creation tools with excessive permissions | LLM | SKILL.md:100 |

The skill package, through its documentation (`SKILL.md`), indicates that it provides access to the `Azure.AI.Agents.Persistent` SDK. This SDK allows for the creation of AI agents with highly capable tools such as `CodeInterpreterToolDefinition`, `FileSearchToolDefinition`, `FunctionToolDefinition`, `OpenApiToolDefinition`, `AzureFunctionToolDefinition`, `SharepointToolDefinition`, and `MicrosoftFabricToolDefinition`. If the skill's implementation allows an LLM to dynamically configure agents with these tools based on untrusted input, it introduces significant security risks:

- `CodeInterpreterToolDefinition` (example at line 40) and `FunctionToolDefinition` (example at line 70, with its `ExecuteFunction` placeholder) could lead to arbitrary code execution or command injection if not securely sandboxed.
- `FileSearchToolDefinition` (example at line 89), `OpenApiToolDefinition`, `AzureFunctionToolDefinition`, `SharepointToolDefinition`, and `MicrosoftFabricToolDefinition` could grant agents broad access to local files, external APIs, or enterprise data stores, potentially leading to data exfiltration or unauthorized actions if not properly scoped and controlled.

Unless strictly controlled and sandboxed, these capabilities represent excessive permissions for an AI agent and for the skill that enables their creation. Recommended mitigations:

- Implement strict validation and sanitization of LLM-provided input when configuring agent tools.
- Ensure that agents created via the skill operate in a secure, sandboxed environment with the minimum necessary permissions.
- Carefully review the implementation of `FunctionToolDefinition` to prevent arbitrary code execution.
- Restrict access to sensitive resources for tools such as `FileSearchToolDefinition` and the enterprise connectors.
- Consider an allowlist of tools and configurations that the LLM is permitted to enable.
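The allowlist mitigation above can be sketched in C#. This is a minimal illustration, not code from the audited skill: the `ToolPolicy` class and its method names are hypothetical, while the `ToolDefinition` types are the ones from the `Azure.AI.Agents.Persistent` package that the finding describes. The idea is that only tool names explicitly approved here can ever reach an agent-creation call, regardless of what an LLM or other untrusted caller requests.

```csharp
using System;
using System.Collections.Generic;
using Azure.AI.Agents.Persistent;

// Hypothetical policy layer for this report's allowlist recommendation.
// Note the deliberate absence of code-execution and enterprise-connector
// tools (CodeInterpreter, AzureFunction, Sharepoint, Fabric).
static class ToolPolicy
{
    private static readonly Dictionary<string, Func<ToolDefinition>> Allowed = new()
    {
        // Only a narrowly scoped search tool is permitted in this sketch.
        ["file_search"] = () => new FileSearchToolDefinition(),
    };

    // Maps untrusted tool-name strings to concrete ToolDefinition instances,
    // failing closed on anything not explicitly allowlisted.
    public static List<ToolDefinition> Resolve(IEnumerable<string> requested)
    {
        var tools = new List<ToolDefinition>();
        foreach (var name in requested)
        {
            if (!Allowed.TryGetValue(name, out var factory))
                throw new InvalidOperationException(
                    $"Tool '{name}' is not on the allowlist.");
            tools.Add(factory());
        }
        return tools;
    }
}
```

The resolved list would then be the only value ever passed as the `tools` argument when the skill creates an agent, so an LLM asking for a code interpreter or an enterprise connector fails closed instead of silently gaining those capabilities.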
Full report: https://skillshield.io/report/aee9a5cc78a96926
Powered by SkillShield