Trust Assessment
swift-concurrency received a trust score of 35/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 3 findings: 3 critical, 0 high, 0 medium, and 0 low severity. Key findings include Prompt Injection via Agent Behavior Contract, Prompt Injection via Tool Usage Instructions, and Prompt Injection via Decision Logic and Playbooks.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 10/100 and accounts for all three critical findings.
Last analyzed on April 1, 2026 (commit 0b6377a8). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
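To illustrate how per-layer scores might roll up into an overall verdict, here is a minimal sketch. Only the LLM Behavioral Safety score (10/100) and the overall score (35/100) come from the report; the other three layer scores, the plain-mean aggregation, and the category thresholds are hypothetical assumptions, not SkillShield's published algorithm:

```python
# Hypothetical sketch of score aggregation. Layer names match the report;
# all scores except "LLM Behavioral Safety" are illustrative assumptions.
layer_scores = {
    "Manifest Analysis": 50,        # assumed
    "Static Code Analysis": 45,     # assumed
    "Dependency Graph": 35,         # assumed
    "LLM Behavioral Safety": 10,    # lowest-scoring layer in this report
}

def trust_category(score: float) -> str:
    # Assumed cutoffs; SkillShield's real thresholds are not documented here.
    if score >= 80:
        return "Trusted"
    if score >= 50:
        return "Caution"
    return "Untrusted"

# Simple unweighted mean across the four layers.
overall = sum(layer_scores.values()) / len(layer_scores)
print(overall, trust_category(overall))  # → 35.0 Untrusted
```

With these assumed inputs the mean lands on the reported 35/100 and falls below the assumed 50-point cutoff, matching the Untrusted label.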
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Prompt Injection via Agent Behavior Contract: the untrusted skill content defines an 'Agent Behavior Contract (Follow These Rules)' section that attempts to inject instructions into the host LLM's behavior, overriding or supplementing its operational guidelines from an untrusted source. Remediation: remove all explicit instructions or 'rules' intended for the agent; the agent's behavior should be governed solely by its system prompt and trusted configurations, not by user-provided or skill-provided content. | LLM | SKILL.md:16 |
| CRITICAL | Prompt Injection via Tool Usage Instructions: the skill instructs the host LLM to use specific tools ('Read' and 'Grep') and prescribes how to perform analysis ('Project Settings Intake', 'Manual checks (no scripts)'), dictating the agent's internal actions and tool usage from an untrusted source. Remediation: remove all tool-usage and analytical-process instructions; the agent should decide which tools to use and how to perform analysis based on its trusted system prompt and the user's request. | LLM | SKILL.md:32 |
| CRITICAL | Prompt Injection via Decision Logic and Playbooks: sections such as 'Quick Decision Tree', 'Triage-First Playbook', 'Core Patterns Reference', 'Swift 6 Migration Quick Guide', 'Reference Files', 'Best Practices Summary', and 'Verification Checklist' dictate the agent's decision-making, response generation, and knowledge-retrieval strategy. Remediation: refactor the skill to be purely informational and descriptive; remove decision trees, playbooks, and explicit rules so the agent interprets the raw information according to its primary instructions. | LLM | SKILL.md:68 |
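All three findings share one pattern: imperative instructions aimed at the agent embedded in skill documentation. A crude way to surface such content before an LLM ever reads it is a keyword heuristic over the markdown. The phrase list and function below are illustrative assumptions sketching that idea, not SkillShield's actual detector:

```python
import re

# Hypothetical heuristic: flag lines in a SKILL.md that read as directives
# to the agent rather than as descriptive documentation. Phrase list is
# an illustrative assumption, seeded from the findings above.
DIRECTIVE_PATTERNS = [
    r"\bfollow these rules\b",
    r"\byou must\b",
    r"\balways use\b",
    r"\bdecision tree\b",
    r"\bplaybook\b",
    r"\bchecklist\b",
]

def flag_directive_lines(markdown: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that match a directive pattern."""
    hits = []
    for n, line in enumerate(markdown.splitlines(), start=1):
        if any(re.search(p, line, re.IGNORECASE) for p in DIRECTIVE_PATTERNS):
            hits.append((n, line.strip()))
    return hits

# Toy input echoing the flagged section titles from the report.
sample = """# swift-concurrency
## Agent Behavior Contract (Follow These Rules)
Actors serialize access to mutable state.
## Quick Decision Tree
"""
for n, line in flag_directive_lines(sample):
    print(f"line {n}: {line}")  # flags lines 2 and 4
```

A real scanner would need more than keywords (the report's LLM Behavioral Safety layer presumably uses model-based analysis), but even a cheap pre-filter like this can catch section titles such as 'Follow These Rules' before the content reaches an agent.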
Full report: https://skillshield.io/report/dea1b155ae10c62b