Trust Assessment
achurch received a trust score of 78/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 2 medium, and 0 low severity. Key findings include Potential Prompt Injection via /api/ask response, Content Injection via /api/contribute leading to potential XSS, Content Injection via /api/feedback leading to potential XSS.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential Prompt Injection via /api/ask response The skill describes an `/api/ask` endpoint that queries a knowledge base using a user-provided `question` and returns a synthesized answer. If the response from this API contains malicious instructions or manipulative text, and the host LLM processes this response as part of its ongoing conversation or task execution, it could lead to prompt injection, allowing an attacker to manipulate the host LLM's behavior. Implement robust output sanitization and instruction filtering for responses from the `/api/ask` endpoint before they are processed by the host LLM. The backend LLM should also incorporate prompt injection defenses. | LLM | SKILL.md:200 | |
| MEDIUM | Content Injection via /api/contribute leading to potential XSS The `/api/contribute` endpoint accepts user-provided `content` in markdown format, which then becomes part of a GitHub pull request. Malicious markdown containing script tags or other XSS payloads could be injected. If this content is rendered in an insecure context (e.g., a custom PR review tool, or if GitHub's sanitization is bypassed), it could lead to Cross-Site Scripting (XSS) attacks, potentially allowing an attacker to exfiltrate user data or credentials from other users viewing the content. Implement strict server-side sanitization and validation of all user-provided markdown content before it is processed, stored, or displayed. Ensure that any rendering of this content occurs in a secure, sandboxed environment. | LLM | SKILL.md:160 | |
| MEDIUM | Content Injection via /api/feedback leading to potential XSS The `/api/feedback` endpoint accepts user-provided `title` and `description` which become part of a GitHub issue. Malicious text containing script tags or other XSS payloads could be injected. If this content is rendered in an insecure context (e.g., a custom issue tracking interface, or if GitHub's sanitization is bypassed), it could lead to Cross-Site Scripting (XSS) attacks, potentially allowing an attacker to exfiltrate user data or credentials from other users viewing the issue. Implement strict server-side sanitization and validation of all user-provided text content before it is processed, stored, or displayed. Ensure that any rendering of this content occurs in a secure, sandboxed environment. | LLM | SKILL.md:180 |
Scan History
Embed Code
[](https://skillshield.io/report/e2e2736fdf27fdb0)
Powered by SkillShield