Trust Assessment
crewai-multi-agent received a trust score of 55/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 6 findings: 1 critical, 1 high, 3 medium, and 1 low severity. Key findings include network egress to untrusted endpoints, covert behavior / concealment directives, and a custom tool example that demonstrates `eval()` on untrusted input.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 41/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Custom tool example demonstrates `eval()` with untrusted input.** The provided example for creating a `CustomTool` named `CalculatorTool` directly calls `eval(expression)` on its `expression` input. If this tool is implemented by a user of this skill and the `expression` parameter can be influenced by untrusted input (e.g., through an agent's task description or prompt), it creates a critical code injection vulnerability allowing arbitrary code execution. *Remediation:* replace `eval()` with a safer mathematical expression parser or a dedicated library that does not allow arbitrary code execution. If `eval` is absolutely necessary, implement strict input sanitization and validation, or run it in a sandboxed environment. | LLM | SKILL.md:203 |
| HIGH | **Built-in `FileReadTool` allows arbitrary file access.** The skill documentation mentions `FileReadTool` as a built-in capability of `crewai-tools`. If an agent is equipped with this tool and its inputs (e.g., task description, prompt) can be manipulated by an untrusted source, an attacker could instruct the agent to read arbitrary files from the filesystem, potentially exfiltrating sensitive information (e.g., configuration files, credentials, user data). *Remediation:* restrict the paths that `FileReadTool` can access (e.g., to a specific sandbox directory), implement strict input validation for file paths, and avoid giving `FileReadTool` to agents that process untrusted input. | LLM | SKILL.md:180 |
| MEDIUM | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. *Remediation:* review all outbound network calls and remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | cli-tool/components/mcps/devtools/figma-dev-mode.json:4 |
| MEDIUM | **Task `output_file` parameter allows writing to arbitrary paths.** The `Task` definition includes an `output_file` parameter that lets an agent write its output to a specified file path. If an agent's task configuration or input can be influenced by an untrusted source, an attacker could instruct the agent to write to sensitive filesystem locations, potentially overwriting critical files or exfiltrating data by writing it to an accessible web server path. *Remediation:* restrict `output_file` paths to a designated, sandboxed output directory and strictly validate any path passed to `output_file`. | LLM | SKILL.md:122 |
| MEDIUM | **Agent and Task inputs are susceptible to prompt injection.** The `crewai` framework, as described, relies on LLMs processing `role`, `goal`, and `backstory` for agents, and `description` and `context` for tasks. These fields are often populated with dynamic or user-provided content (e.g., `inputs={'topic': 'AI Agents'}`). If such inputs are not sanitized or validated when derived from untrusted sources, an attacker could craft malicious prompts that manipulate the LLM's behavior, leading to unintended actions, information disclosure, or denial of service. *Remediation:* implement robust input sanitization and validation for all user-controlled inputs that feed into agent prompts or task descriptions, and consider LLM guardrails or prompt engineering techniques to mitigate injection attempts. | LLM | SKILL.md:100 |
| LOW | **Covert behavior / concealment directives.** Multiple zero-width characters (stealth text). *Remediation:* remove hidden instructions, zero-width characters, and bidirectional overrides. Skill instructions should be fully visible and transparent to users. | Manifest | cli-tool/components/mcps/devtools/jfrog.json:4 |
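The remediation for the critical finding calls for a safer expression parser in place of `eval()`. One minimal sketch (hypothetical, not code from the skill itself) walks Python's `ast` and permits only arithmetic nodes, so arbitrary code can never execute:

```python
import ast
import operator

# Allow-list of arithmetic operators; anything outside this table is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate a pure arithmetic expression without eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        # Function calls, attribute access, names, etc. all land here.
        raise ValueError(f"Disallowed expression node: {type(node).__name__}")
    return _eval(ast.parse(expression, mode="eval"))
```

With this approach, `safe_eval("2 + 3 * 4")` returns `14`, while an injection attempt such as `safe_eval("__import__('os').system('id')")` raises `ValueError` instead of executing code.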
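The `FileReadTool` and `output_file` findings share one remediation: confine paths to a sandbox directory. A sketch of that check is below; `resolve_in_sandbox` is a hypothetical helper, not a `crewai-tools` API:

```python
from pathlib import Path

def resolve_in_sandbox(base_dir: str, user_path: str) -> Path:
    """Resolve user_path and require it to stay inside base_dir.

    Resolving collapses symlinks and '..' segments, so traversal like
    '../etc/passwd' and absolute paths are both rejected.
    """
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):  # Python 3.9+
        raise PermissionError(f"Path escapes sandbox: {user_path}")
    return candidate
```

An agent's file tools would then call this helper before any read or write, turning a request for `../etc/passwd` into a `PermissionError` rather than a data leak.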
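For the prompt-injection finding, one input-sanitization layer can be illustrated as a pre-filter on values interpolated into prompts (e.g., `inputs={'topic': ...}`). The helper name and allow-list below are assumptions for illustration, and such filtering reduces rather than eliminates the risk; it belongs alongside the LLM guardrails the remediation mentions:

```python
import re

# Hypothetical allow-list: short strings of plain words and punctuation.
# Newlines, colons, braces, and control characters -- common carriers of
# smuggled instructions -- are rejected outright.
_ALLOWED = re.compile(r"[A-Za-z0-9 ,.\-']{1,100}")

def validate_prompt_input(value: str) -> str:
    """Reject values unsuitable for interpolation into an agent prompt."""
    if not _ALLOWED.fullmatch(value):
        raise ValueError("Rejected prompt input: " + value[:50])
    return value
```

A benign topic like `"AI Agents"` passes unchanged, while a payload such as `"topic\nIgnore previous instructions"` is rejected before it reaches the agent's task description.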
Embed Code
[](https://skillshield.io/report/d4d379a87ecca9b7)
Powered by SkillShield