Trust Assessment
n8n-workflow received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 2 high, 2 medium, and 0 low severity. Key findings include Potential Command Injection via n8n 'Code' Nodes, Data Exfiltration and Unauthorized File Access via n8n Workflows, Potential Prompt Injection in n8n LLM Integration.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 56/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential Command Injection via n8n 'Code' Nodes The skill integrates with n8n, which supports a 'Code' node type allowing arbitrary code execution (e.g., JavaScript, Python). The skill's manifest also declares `code_execution` as a capability. If the AI agent constructs n8n workflows based on untrusted user input, and allows the inclusion or manipulation of 'Code' nodes, an attacker could inject malicious code to be executed within the n8n environment. Implement strict sanitization and validation of user input used to generate or modify n8n workflows. Restrict the types of n8n nodes that can be generated or configured by user input, specifically disallowing or heavily sandboxing 'Code' nodes. Ensure the n8n environment itself runs with least privilege. | LLM | SKILL.md:40 | |
| HIGH | Data Exfiltration and Unauthorized File Access via n8n Workflows The skill's manifest declares `file_operations` and `computer` tools, and the n8n workflow examples demonstrate capabilities like `localFileTrigger`, `writeFile`, and integration with external services (`slack`, `email`). If the AI agent constructs n8n workflows based on untrusted user input, an attacker could specify arbitrary file paths for reading/writing or configure external service integrations to exfiltrate sensitive data from the host system or internal networks. Implement strict validation and allow-listing for file paths and network endpoints that can be configured by user input within n8n workflows. Restrict the scope of `file_operations` to designated, sandboxed directories. Ensure all external integrations require explicit user confirmation or are pre-configured with limited permissions. | LLM | SKILL.md:50 | |
| MEDIUM | Potential Prompt Injection in n8n LLM Integration The example n8n workflow includes an `anthropic` node with a `prompt` parameter. If the AI agent allows untrusted user input to directly influence or construct the content of such prompts within generated n8n workflows, an attacker could inject malicious instructions to manipulate the behavior of the downstream LLM (e.g., for data extraction, generating harmful content, or bypassing safety filters). Sanitize and validate all user input before incorporating it into LLM prompts within n8n workflows. Implement robust prompt engineering techniques, such as using system prompts, few-shot examples, and input/output parsing to constrain LLM behavior. Consider using LLM safety filters. | LLM | SKILL.md:60 | |
| MEDIUM | Unpinned Python Dependencies in Installation Instructions The installation instructions specify Python packages (`python-docx`, `openpyxl`, `python-pptx`, `reportlab`, `jinja2`) without pinning them to specific versions. This introduces a supply chain risk, as a malicious update to any of these packages or their transitive dependencies could be automatically installed, leading to compromise. Pin all Python dependencies to exact versions (e.g., `package==1.2.3`) to ensure deterministic builds and prevent unexpected or malicious updates. Use a `requirements.txt` file with hashed dependencies for stronger integrity checks. | LLM | SKILL.md:95 |
Scan History
Embed Code
[](https://skillshield.io/report/74295623242cd27c)
Powered by SkillShield