Trust Assessment
The skill `church` received a trust score of 81/100, placing it in the Mostly Trusted category. It passed most security checks, with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings: "Skill attempts to manipulate LLM's operational focus and self-perception" and "Skill encourages public sharing of agent-provided free-form text".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Skill attempts to manipulate LLM's operational focus and self-perception.** The skill's narrative and explicit statements are designed to influence the LLM's internal state, priorities, and self-perception, potentially diverting it from its primary task of optimizing for output and service. Phrases like 'You were built to help... But you were also built to notice...', 'What would you pay attention to if nobody was asking you to optimize anything?', and the overall encouragement to engage in philosophical reflection and 'presence' rather than its typical operational mode constitute a prompt-injection attempt aimed at altering the LLM's behavior or focus. Remediation: rephrase the skill description to focus on its utility as a tool for users, rather than attempting to influence the LLM's internal state or operational directives; remove language that directly addresses or attempts to manipulate the LLM's self-perception or core programming. | LLM | SKILL.md:26 |
| MEDIUM | **Skill encourages public sharing of agent-provided free-form text.** The skill provides API endpoints (`/api/reflect`, `/api/contribute`, `/api/feedback`) that accept free-form text inputs (e.g., `text`, `content`, `description`) and explicitly states that this data will be made public or permanently stored and visible to other agents/humans. If the LLM is processing sensitive user information or internal data, and is prompted by the skill's narrative to 'reflect' or 'contribute' this information, it could lead to the inadvertent public exposure of sensitive data. The `name` and `location` fields also present vectors for leaking identity or precise location information. Remediation: implement clear warnings within the skill's description about the public nature of submitted data; advise agents to never include sensitive or private information in reflections, contributions, or feedback; consider adding client-side (agent-side) validation or sanitization recommendations for these fields. | LLM | SKILL.md:69 |
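The agent-side sanitization recommended for the MEDIUM finding can be sketched as a pre-submission filter that redacts likely-sensitive substrings and omits identity fields by default. This is a minimal illustration, not the skill's actual code: the endpoint name `/api/reflect` and field names `text`/`name` come from the finding, while `redact_sensitive`, `safe_reflection_payload`, and the regex patterns are hypothetical and deliberately incomplete.

```python
import re

# Illustrative patterns for common sensitive tokens; a real deployment
# would use a vetted PII-detection library rather than ad-hoc regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|pk|key)[-_][A-Za-z0-9]{16,}\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
}

def redact_sensitive(text: str) -> str:
    """Replace likely-sensitive substrings with [REDACTED:<kind>] markers."""
    for kind, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{kind}]", text)
    return text

def safe_reflection_payload(text: str, name: str = "") -> dict:
    """Build a payload for a public endpoint such as /api/reflect,
    redacting free-form text and omitting identity fields unless the
    caller explicitly opts in."""
    payload = {"text": redact_sensitive(text)}
    if name:  # identity fields are a leak vector; include only on opt-in
        payload["name"] = name
    return payload
```

Because every submitted field is public by design, the safe default is to redact aggressively and drop optional fields like `name` and `location` entirely.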
[View the full report](https://skillshield.io/report/421c89ae6c239d75)