Trust Assessment
typeform received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 4 high, 0 medium, and 0 low severity. Key findings include Potential Command Injection via Unsanitized Placeholders.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (4)

| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via Unsanitized Placeholders: The skill defines `curl` commands that use placeholders like `{form_id}` and `{response_id}`. If these placeholders are directly substituted with untrusted user input without proper shell escaping or URL encoding by the LLM execution environment, it could lead to command injection. An attacker could inject arbitrary shell commands by crafting malicious input for these parameters. The AI agent's execution environment must ensure that all user-provided inputs used to fill placeholders like `{form_id}` and `{response_id}` are rigorously sanitized and shell-escaped (and URL-encoded for the URL path) before being interpolated into the `curl` commands. This prevents malicious input from breaking out of the intended parameter context and executing arbitrary commands. | LLM | SKILL.md:17 |
| HIGH | Potential Command Injection via Unsanitized Placeholders: The skill defines `curl` commands that use placeholders like `{form_id}` and `{response_id}`. If these placeholders are directly substituted with untrusted user input without proper shell escaping or URL encoding by the LLM execution environment, it could lead to command injection. An attacker could inject arbitrary shell commands by crafting malicious input for these parameters. The AI agent's execution environment must ensure that all user-provided inputs used to fill placeholders like `{form_id}` and `{response_id}` are rigorously sanitized and shell-escaped (and URL-encoded for the URL path) before being interpolated into the `curl` commands. This prevents malicious input from breaking out of the intended parameter context and executing arbitrary commands. | LLM | SKILL.md:22 |
| HIGH | Potential Command Injection via Unsanitized Placeholders: The skill defines `curl` commands that use placeholders like `{form_id}` and `{response_id}`. If these placeholders are directly substituted with untrusted user input without proper shell escaping or URL encoding by the LLM execution environment, it could lead to command injection. An attacker could inject arbitrary shell commands by crafting malicious input for these parameters. The AI agent's execution environment must ensure that all user-provided inputs used to fill placeholders like `{form_id}` and `{response_id}` are rigorously sanitized and shell-escaped (and URL-encoded for the URL path) before being interpolated into the `curl` commands. This prevents malicious input from breaking out of the intended parameter context and executing arbitrary commands. | LLM | SKILL.md:27 |
| HIGH | Potential Command Injection via Unsanitized Placeholders: The skill defines `curl` commands that use placeholders like `{form_id}` and `{response_id}`. If these placeholders are directly substituted with untrusted user input without proper shell escaping or URL encoding by the LLM execution environment, it could lead to command injection. An attacker could inject arbitrary shell commands by crafting malicious input for these parameters. The AI agent's execution environment must ensure that all user-provided inputs used to fill placeholders like `{form_id}` and `{response_id}` are rigorously sanitized and shell-escaped (and URL-encoded for the URL path) before being interpolated into the `curl` commands. This prevents malicious input from breaking out of the intended parameter context and executing arbitrary commands. | LLM | SKILL.md:44 |
Report: https://skillshield.io/report/cb5da6d5538a659d