Trust Assessment
questions-form received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is unsanitized user input embedded in LLM-generated responses.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unsanitized user input embedded in LLM-generated responses.** The skill instructs the agent to embed user-provided free-text answers (from 'Other' options) directly into its acknowledgement messages (e.g., `"Got it -- <question label>: **<user's text>**"`). If this generated message is later fed back into the LLM's context without sanitization, a malicious user could inject instructions or manipulate the LLM's behavior by crafting specific free-text input, creating a prompt injection vector through which user input can influence the LLM's internal state or future actions. **Remediation:** sanitize and encode all user-provided text before it is embedded into LLM prompts or generated responses. In particular, escape or filter `<user's text>` when building acknowledgement messages so it cannot be interpreted as instructions, ideally via a dedicated function for LLM-safe string interpolation. | LLM | SKILL.md:120 |
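The recommended "dedicated function for LLM-safe string interpolation" could look like the following minimal sketch. The function names (`sanitize_for_llm`, `acknowledge`) and the specific denylist pattern are illustrative assumptions, not part of the skill; denylist filtering is a best-effort mitigation, not a guarantee against prompt injection.

```python
import re

def sanitize_for_llm(text: str, max_len: int = 200) -> str:
    """Neutralize user-provided free text before embedding it in an
    LLM-visible message. Hypothetical helper: strips control characters,
    markdown markers, and common instruction-override phrases."""
    # Drop control characters, keeping ordinary printable text and spaces.
    text = "".join(ch for ch in text if ch.isprintable() or ch in " \t")
    # Remove markdown characters that could restyle or restructure the message.
    text = re.sub(r"[`*_#>]", "", text)
    # Replace phrases commonly used to override prior instructions
    # (an illustrative, deliberately narrow denylist).
    text = re.sub(
        r"(?i)\b(ignore|disregard)\s+(all\s+)?(previous|prior|above)\s+instructions?\b",
        "[filtered]",
        text,
    )
    # Truncate to bound the influence of any single answer.
    return text[:max_len].strip()

def acknowledge(label: str, user_text: str) -> str:
    """Build the acknowledgement message with the user text sanitized."""
    return f"Got it -- {label}: {sanitize_for_llm(user_text)}"
```

A stronger design than filtering is to keep user text out of the instruction channel entirely, e.g. passing it as structured data the agent treats as inert, and to length-limit every free-text field.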
[Full report](https://skillshield.io/report/2c143e3e03b6b213)
Powered by SkillShield