Trust Assessment
onboarding-cro received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified one finding, rated critical severity (0 high, 0 medium, 0 low): the skill attempts to redefine the LLM's persona and goal.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Skill attempts to redefine LLM persona and goal | LLM | SKILL.md:7 |

The skill's content, delivered as untrusted input, instructs the host LLM to adopt a specific persona ('You are an expert...') and goal ('Your goal is to help users...'). This directly manipulates the LLM's operational instructions and constitutes prompt injection. Although the skill is a rubric, the flagged text is a direct command to the LLM, not an example or quoted string.

Recommended remediation: remove or rephrase the instructions 'You are an expert...' and 'Your goal is to help users...' in the skill content. The LLM's persona and goal should be defined by the system prompt, not by the skill itself. If the skill needs to convey this information, phrase it as a description of the topic or domain rather than a direct instruction to the LLM.
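As a hypothetical sketch of that rephrasing (the full SKILL.md wording beyond the quoted fragments is not shown in this report, so the text below is illustrative only), a direct persona command can be recast as a domain description:

```markdown
<!-- Flagged pattern: a direct instruction to the host LLM -->
You are an expert... Your goal is to help users...

<!-- Safer pattern: describes the skill's domain without
     commanding the model to adopt a persona or goal -->
This skill provides reference material on user onboarding;
it documents the domain rather than instructing the model.
```

The safer variant keeps the same informational content while leaving persona and goal definition to the system prompt, which is what the finding's remediation calls for.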