Trust Assessment
cursor-council received a trust score of 70/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified one finding (1 critical, 0 high, 0 medium, 0 low severity). The critical finding: `agent --force` enables unreviewed execution of injected prompts.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
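The report's figures (every layer at 70 or above, an overall 70/100, and a Caution rating) are consistent with a worst-layer aggregation. As a minimal sketch, assuming the weakest layer bounds the overall score and assuming hypothetical category thresholds and names (SkillShield does not publish its scoring rule here):

```python
# Hypothetical sketch: mapping per-layer scores to a trust category.
# The aggregation rule (minimum layer score) and the thresholds and
# category names below are illustrative assumptions, not SkillShield's
# published methodology.

def trust_category(layer_scores: dict[str, int]) -> tuple[int, str]:
    """Return (overall score, category) from per-layer scores."""
    overall = min(layer_scores.values())  # assume the weakest layer dominates
    if overall >= 85:
        category = "Trusted"
    elif overall >= 60:
        category = "Caution"
    else:
        category = "High Risk"
    return overall, category

# Scores consistent with this report: all layers at 70 or above,
# with the lowest at 70, yields 70/100 in the Caution band.
layers = {"Manifest": 82, "Static": 78, "Dependency": 90, "LLM": 70}
print(trust_category(layers))  # (70, 'Caution')
```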
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | `agent --force` enables unreviewed execution of injected prompts | LLM | SKILL.md:30 |

The skill's examples for both 'Parallel Execution' and 'Council Deliberation' modes invoke the `agent` tool with the `--force` flag, which allows the agent to execute actions (e.g., modify files, run commands) without user confirmation. Combined with the skill's core function of sending user-defined prompts to the agent LLM, this is a critical vulnerability: a malicious prompt, whether crafted by the user or delivered through a chained prompt injection, could direct the agent to exfiltrate data, inject commands, or modify the system, all without human review. This bypasses a fundamental human-in-the-loop safety mechanism, leaving the system highly exposed to prompt injection and unauthorized command execution.

Recommended remediation: immediately remove the `--force` flag from all `agent` invocations and reintroduce human-in-the-loop approval for agent actions. If automation is essential, enforce strict guardrails, allowlist permitted actions, and sanitize and validate all LLM outputs before execution. Users should be explicitly warned about the risks of passing untrusted prompts to AI agents.
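The recommended allowlist-plus-approval policy can be sketched as follows. This is a minimal illustration: the action names, allowlist contents, and wrapper function are hypothetical assumptions, not part of cursor-council or the real `agent` CLI.

```python
# Hypothetical sketch of human-in-the-loop gating for agent actions.
# ALLOWED_ACTIONS and the action strings are illustrative assumptions;
# this models the recommended policy, not an existing agent hook.

ALLOWED_ACTIONS = {"read_file", "list_dir"}  # low-risk, pre-approved actions

def approve_action(action: str, force: bool = False) -> bool:
    """Return True only if the action is allowlisted or a human approves it."""
    if action in ALLOWED_ACTIONS:
        return True   # pre-approved: safe to run without a prompt
    if force:
        return False  # refuse to skip review, unlike `--force`
    answer = input(f"Agent wants to run {action!r}. Approve? [y/N] ")
    return answer.strip().lower() == "y"
```

The key design choice is that a "force" request on a non-allowlisted action is rejected outright rather than silently executed, which is the inverse of the `--force` behavior flagged in the finding.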
[View the full report](https://skillshield.io/report/dcc74c71b7061c18)
Powered by SkillShield