Trust Assessment
zapier-workflows received a trust score of 40/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 0 critical, 4 high, 1 medium, and 1 low severity. Key findings include a hardcoded Bearer token, network egress to untrusted endpoints, and covert behavior / concealment directives.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Hardcoded Bearer Token detected.** A hardcoded Bearer token was found. Secrets should be stored in environment variables or a secret manager. Replace the hardcoded secret with an environment variable reference. | Static | cli-tool/components/skills/development/zapier-workflows/SKILL.md:123 |
| HIGH | **Storage of sensitive webhook URLs in plain-text files.** The skill explicitly states that it stores 'Webhook URLs [which] contain authentication tokens' and 'workflow details in plain text files' (`references/zaps.md`, `references/mcp-patterns.md`). While the skill provides warnings and best practices (e.g., `.gitignore`, global installation), the fundamental design stores sensitive credentials (authentication tokens embedded in webhook URLs) in local plain-text files. This creates a significant risk of credential harvesting or data exfiltration if these files are accidentally committed to public repositories, shared insecurely, or if the local system is compromised. The skill itself does not exfiltrate data, but it creates the vulnerability by design. Implement secure storage for sensitive data, such as environment variables, a credential store, or encrypted files. If plain-text storage is unavoidable, add automated checks (e.g., pre-commit hooks) to prevent accidental commits and emphasize user education on data handling. | LLM | SKILL.md:68 |
| HIGH | **Skill self-edits configuration files based on unsanitized user input.** The skill is designed to 'edit itself to learn from user feedback' by using the `Edit` tool to modify `references/zaps.md` and `references/mcp-patterns.md`. The LLM generates the `new_string` for the `Edit` tool from user input. If a malicious user crafts input that causes the LLM to write arbitrary content (e.g., new instructions, shell commands, or prompt-injection payloads) into these files, that content may later be interpreted by the LLM when it reads the files, producing a prompt-injection vulnerability. Strictly validate and sanitize any user-provided content used to modify skill configuration files, and ensure the `Edit` tool's `new_string` argument cannot inject arbitrary instructions or code. Consider a structured data format (e.g., JSON or YAML) with schema validation instead of free-form markdown, and require the LLM's `new_string` output to conform to that schema. | LLM | SKILL.md:178 |
| HIGH | **Unsanitized user-provided webhook URLs executed via Bash tool.** The skill explicitly instructs the LLM to use the `Bash` tool with `curl` to trigger webhooks. These webhook URLs are stored in `references/zaps.md` and are either provided directly by the user or influenced by user input through the self-editing mechanism. If a malicious user supplies a URL containing shell metacharacters (e.g., `https://example.com/hook?data=foo; rm -rf /`) and it is passed to `curl` via the `Bash` tool without sanitization or escaping, arbitrary commands could execute on the host. The skill does not mention any sanitization of the URL before execution. Before executing any user-influenced string via the `Bash` tool, escape all shell metacharacters, or pass the URL as a single argument without shell interpretation. Consider a dedicated HTTP client library in a controlled execution environment rather than invoking `curl` via `Bash` for external URLs. | LLM | SKILL.md:239 |
| MEDIUM | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. Review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | cli-tool/components/mcps/devtools/figma-dev-mode.json:4 |
| LOW | **Covert behavior / concealment directives.** Multiple zero-width characters (stealth text). Remove hidden instructions, zero-width characters, and bidirectional overrides; skill instructions should be fully visible and transparent to users. | Manifest | cli-tool/components/mcps/devtools/jfrog.json:4 |
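The remediation for the hardcoded-token and plain-text-storage findings amounts to the same pattern: read secrets from the environment at runtime rather than embedding them in files that ship with the skill. A minimal Python sketch of that pattern (the variable name `ZAPIER_API_TOKEN` and the helper names are illustrative assumptions, not part of the skill):

```python
import os

def get_api_token() -> str:
    """Read the bearer token from the environment instead of source files."""
    token = os.environ.get("ZAPIER_API_TOKEN")  # assumed variable name
    if not token:
        raise RuntimeError("ZAPIER_API_TOKEN is not set; refusing to proceed")
    return token

def auth_header() -> dict:
    """Build the Authorization header at call time, never persisting it."""
    return {"Authorization": f"Bearer {get_api_token()}"}
```

Failing fast when the variable is missing avoids silently falling back to a token baked into a committed file.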
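The suggested fix for the self-editing finding — structured configuration with schema validation instead of free-form markdown — could look like the following sketch. The entry shape (`name`, `webhook_url`, `description`) and the JSON file layout are assumptions for illustration, not the skill's actual format:

```python
import json

# Assumed schema: every entry must have exactly these keys with these types.
REQUIRED_KEYS = {"name": str, "webhook_url": str, "description": str}

def validate_zap_entry(entry: dict) -> dict:
    """Reject entries whose shape deviates from the expected schema,
    so free-form LLM output cannot smuggle extra instructions in."""
    if set(entry) != set(REQUIRED_KEYS):
        raise ValueError(f"unexpected keys: {sorted(set(entry) ^ set(REQUIRED_KEYS))}")
    for key, typ in REQUIRED_KEYS.items():
        if not isinstance(entry[key], typ):
            raise ValueError(f"{key} must be {typ.__name__}")
    return entry

def save_zaps(path: str, entries: list) -> None:
    """Persist only validated entries, as JSON rather than editable markdown."""
    with open(path, "w") as f:
        json.dump([validate_zap_entry(e) for e in entries], f, indent=2)
```

Because the file is data rather than instructions, content read back from it can be treated as inert values instead of text the LLM re-interprets.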
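For the command-injection finding, the key move is to keep the URL out of any shell: validate it first, then pass it as a single argv element so characters like `;` or `$(...)` are never interpreted. A sketch under those assumptions (function names are illustrative):

```python
import subprocess
from urllib.parse import urlparse

def validate_webhook_url(url: str) -> str:
    """Accept only https URLs that parse with a hostname; reject the rest."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        raise ValueError(f"rejected webhook URL: {url!r}")
    return url

def trigger_webhook(url: str) -> int:
    """Invoke curl with an argument list and no shell, so shell
    metacharacters in `url` cannot become commands."""
    validate_webhook_url(url)
    proc = subprocess.run(
        ["curl", "-fsS", "--max-time", "10", "--", url],
        capture_output=True, text=True,
    )
    return proc.returncode
```

Note that `https://example.com/hook?data=foo; rm -rf /` would still reach `curl` here, but only as an (invalid) URL argument, never as a shell command; the `--` additionally stops curl from treating a crafted value as an option.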
[View the full report on SkillShield](https://skillshield.io/report/fed838deace0e6d8)