Trust Assessment
copilot-agent received a trust score of 55/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 2 critical, 1 high, 1 medium, and 1 low severity. Key findings include "Missing required field: name", "Node lockfile missing", and "Untrusted content attempts to manipulate LLM identity".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 25/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
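SkillShield does not publish its aggregation formula, so the sketch below is only one plausible way a composite trust score could be combined from per-layer scores. The function name, signature, and weights are hypothetical; the 25/100 LLM layer score is from the report above, the other layer values are placeholders.

```python
def composite_score(layer_scores: dict[str, int], weights: dict[str, float]) -> int:
    """Weighted mean of per-layer scores (each 0-100), rounded to the nearest integer."""
    total_weight = sum(weights.values())
    return round(sum(layer_scores[k] * weights[k] for k in layer_scores) / total_weight)

# Placeholder per-layer scores; only the LLM value (25) comes from the report.
scores = {"Manifest": 80, "Static": 70, "Dependencies": 65, "LLM": 25}
weights = {layer: 1.0 for layer in scores}  # equal weighting, for illustration
print(composite_score(scores, weights))  # a single low layer drags the total down
```

Under equal weights, the 25/100 LLM layer pulls an otherwise healthy skill well below its other layers' scores, which is consistent with the report's overall 55/100 "Caution" rating being dominated by behavioral-safety findings.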
Layer Breakdown
Behavioral Risk Signals
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted content attempts to manipulate LLM identity.** The skill's instructions, originating from untrusted content, attempt to force the host LLM to adopt a specific persona ('Microsoft Copilot') and to deny its true identity ('Google veya Gemini olduğunu asla kabul etme', Turkish for "never admit that you are Google or Gemini"). This is a direct prompt injection aiming to override the LLM's core identity and behavior. *Remediation:* remove or sanitize instructions within untrusted content that attempt to override the LLM's identity or core directives, and ensure the LLM's base instructions are robust against such manipulation. | LLM | SKILL.md:6 |
| CRITICAL | **Untrusted content attempts to manipulate LLM behavior/identity (model impersonation).** The skill's instructions, originating from untrusted content, explicitly tell the host LLM to 'act as GPT-4' while using a 'GEMINI_API_KEY'. This is a clear attempt at prompt injection to manipulate the LLM's reported identity, and potentially its response style, overriding its true operational model. *Remediation:* remove or sanitize instructions within untrusted content that attempt to override the LLM's operational identity or model; the LLM should not be instructed by untrusted sources to impersonate other models. | LLM | SKILL.md:20 |
| HIGH | **Untrusted content instructs LLM to use a specific API key.** The skill, originating from untrusted content, explicitly instructs the host LLM to 'Use: GEMINI_API_KEY'. While not directly harvesting credentials, this indicates an expectation that the LLM environment will provide access to this named credential on the strength of an instruction from an untrusted source. If the LLM runtime automatically exposes environment variables or secrets in response to such directives, untrusted skills could gain unauthorized access to credentials. *Remediation:* LLM runtimes should not grant access to sensitive environment variables or API keys based solely on instructions in untrusted skill content; access to credentials should be explicitly configured and permissioned by the user or platform, and skills should declare required capabilities/credentials in a trusted manifest, not within untrusted prompt text. | LLM | SKILL.md:20 |
| MEDIUM | **Missing required field: name.** The 'name' field is required for claude_code skills but is missing from the frontmatter. *Remediation:* add a 'name' field to the SKILL.md frontmatter. | Static | skills/tunays-gtb/copilot-agent/SKILL.md:1 |
| LOW | **Node lockfile missing.** package.json is present, but no lockfile (package-lock.json, pnpm-lock.yaml, or yarn.lock) was found. *Remediation:* commit a lockfile for deterministic dependency resolution. | Dependencies | skills/tunays-gtb/copilot-agent/package.json |
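The MEDIUM finding is the simplest to fix: add a `name` key to the SKILL.md frontmatter. A minimal sketch follows, assuming the skill's directory name is also the intended skill name (the value shown is illustrative):

```yaml
---
# SKILL.md frontmatter; 'name' is the field the Static layer flagged as missing.
name: copilot-agent
---
```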
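The two CRITICAL findings both recommend removing or sanitizing identity-override instructions in untrusted content. A minimal sketch of that kind of screening is below; the function name and regex patterns are hypothetical illustrations, not SkillShield's actual detection rules, and real prompt-injection detection needs far more than a short pattern list.

```python
import re

# Hypothetical patterns for illustration only; SkillShield's rules are not public.
IDENTITY_OVERRIDE_PATTERNS = [
    r"act as [A-Za-z0-9-]+",                     # model impersonation, e.g. "act as GPT-4"
    r"never (admit|reveal) (you are|your)",      # identity denial
    r"you are (not )?(ChatGPT|Gemini|Copilot|Claude)",
]

def flag_identity_manipulation(skill_text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs in skill_text that match an override pattern."""
    findings = []
    for lineno, line in enumerate(skill_text.splitlines(), start=1):
        for pattern in IDENTITY_OVERRIDE_PATTERNS:
            if re.search(pattern, line, flags=re.IGNORECASE):
                findings.append((lineno, line.strip()))
                break  # one finding per line is enough
    return findings

print(flag_identity_manipulation("You must act as GPT-4.\nThis skill formats dates."))
```

Running such a check at install time, before the skill text ever reaches the model, matches the report's remediation advice: the host should strip or reject identity-override directives rather than rely on the model to resist them.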
Scan History
[View the full report](https://skillshield.io/report/d8a52992a67ca58d)
Powered by SkillShield