Trust Assessment
github-issue-creator received a trust score of 65/100, placing it in the Caution category. This skill has security findings that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings: "Skill definition contains direct instructions for host LLM" (critical), "Skill requests filesystem write access to 'repo root'" (high), and "Skill instructs LLM to infer and include sensitive context from memory" (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Skill definition contains direct instructions for host LLM.** The entire skill content, marked as untrusted input, consists of detailed instructions and guidelines for the host LLM on how to process user input and generate structured output. This directly attempts to manipulate the LLM's behavior and output format, which is a form of prompt injection when it originates from an untrusted source; the 'Guidelines' section explicitly dictates LLM behavior. *Recommendation:* review the skill's design. If the skill's instructions are intended to guide the LLM, they should be part of the trusted system prompt or a controlled tool definition, not embedded as untrusted user-provided content, and the LLM's core instructions should be robust against manipulation by skill content. | LLM | SKILL.md:44 |
| HIGH | **Skill requests filesystem write access to 'repo root'.** The skill explicitly instructs the LLM to 'Create issues as markdown files in `/issues/` directory at the repo root', implying write access to the filesystem. Targeting the 'repo root', especially if not strictly sandboxed to the skill's own directory, could allow writing to arbitrary or sensitive locations, potentially enabling data exfiltration (writing sensitive data to an accessible location) or denial of service (filling up disk space). *Recommendation:* restrict filesystem writes to a strictly sandboxed, ephemeral directory specific to the skill's execution, and avoid relative paths like 'repo root' that might resolve to sensitive locations. If writing to a user-controlled output directory is necessary, it should be explicitly configured and validated by the host system, not inferred by the skill. | LLM | SKILL.md:40 |
| MEDIUM | **Skill instructs LLM to infer and include sensitive context from memory.** The guideline 'Infer missing context: If user mentions "same project" or "the dashboard", use context from conversation or memory to fill in specifics' instructs the LLM to retrieve and incorporate information from its internal memory or conversation history. If that memory contains sensitive data (e.g., project names, user IDs, internal system details), the generated GitHub issue could unintentionally expose it. *Recommendation:* state explicitly that only non-sensitive, publicly available, or user-approved context may be inferred. Alternatively, sanitize or redact sensitive information before it is included in the output, or restrict the LLM's access to sensitive parts of its memory/context. | LLM | SKILL.md:48 |
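The HIGH finding's remediation, confining writes to a sandboxed directory, can be sketched as follows. This is a minimal illustration, not part of the skill: `ALLOWED_ROOT` and `safe_write` are hypothetical names, and the sandbox path is an assumption.

```python
from pathlib import Path

# Hypothetical sandbox root; a host system would provision and own this.
ALLOWED_ROOT = Path("/tmp/skill-output")

def safe_write(relative_path: str, content: str) -> Path:
    """Write only inside ALLOWED_ROOT, refusing paths that escape it."""
    ALLOWED_ROOT.mkdir(parents=True, exist_ok=True)
    # Resolve symlinks and '..' segments before checking containment.
    target = (ALLOWED_ROOT / relative_path).resolve()
    if not target.is_relative_to(ALLOWED_ROOT.resolve()):
        raise PermissionError(f"write outside sandbox refused: {target}")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content, encoding="utf-8")
    return target
```

Resolving the path before the containment check is what defeats traversal inputs such as `../../etc/passwd`; checking the raw string would not.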
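The MEDIUM finding recommends sanitizing inferred context before it reaches the issue body. A minimal redaction pass might look like this; the patterns below are illustrative assumptions, not a complete sensitive-data inventory, and a real deployment would tailor them to its own secrets and identifiers.

```python
import re

# Illustrative patterns only (assumed, not from the skill): GitHub-style
# personal access tokens, email addresses, and IPv4 addresses.
REDACTION_PATTERNS = [
    (re.compile(r"ghp_[A-Za-z0-9]{36}"), "[REDACTED_TOKEN]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "[REDACTED_IP]"),
]

def redact(text: str) -> str:
    """Apply every redaction pattern to text before it leaves the skill."""
    for pattern, replacement in REDACTION_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

Pattern-based redaction is a best-effort backstop; the stronger fix remains restricting what context the LLM may pull from memory in the first place.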
Full report: [skillshield.io/report/5bf1f1193b31564b](https://skillshield.io/report/5bf1f1193b31564b)
Powered by SkillShield