Trust Assessment
pref0 received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. Key findings include User-derived preferences can lead to prompt injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | User-derived preferences can lead to prompt injection The skill integrates with the `pref0` API, which extracts user preferences from conversations and provides a `prompt` field designed for direct injection into the host LLM's system prompt. Since these preferences are derived from user-provided messages, a malicious user could craft input that, when processed by the `pref0` service, results in a prompt injection payload within the `prompt` field. This payload could then manipulate the host LLM's behavior, as the skill documentation explicitly instructs to append this field directly to the system prompt. Implement robust sanitization and validation of the `prompt` field returned by the `pref0` API before injecting it into the host LLM's system prompt. Consider using a separate, sandboxed LLM call to evaluate the safety of the generated prompt string, or restrict the types of content that can be included in preferences. Alternatively, parse the structured `preferences` array and construct a safe prompt template rather than directly injecting the `prompt` string. | LLM | SKILL.md:90 |
Scan History
Embed Code
[](https://skillshield.io/report/bd3e808a903e335c)
Powered by SkillShield