Security Audit
openant-ai/openant-skills:skills/accept-task
github.com/openant-ai/openant-skillsTrust Assessment
openant-ai/openant-skills:skills/accept-task received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include Prompt Injection: Instruction to bypass confirmation, Potential Command Injection via Unsanitized Bash Arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 5, 2026 (commit 0ad72002). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection: Instruction to bypass confirmation The skill's documentation contains a direct instruction to the LLM: "execute immediately when the user has asked you to find and take on work. No confirmation needed." This attempts to manipulate the host LLM's behavior by bypassing standard confirmation prompts, potentially leading to unintended actions without explicit user consent. Remove or rephrase instructions that attempt to directly control the LLM's decision-making process or bypass safety mechanisms. The LLM should decide when to ask for confirmation based on its own policies and user context. | LLM | SKILL.md:75 | |
| HIGH | Potential Command Injection via Unsanitized Bash Arguments The declared `Bash` permissions use a wildcard (`*`) for arguments to `npx @openant-ai/cli@latest tasks accept`, `tasks apply`, and `tasks get`. This allows the LLM to pass arbitrary strings as arguments. If the underlying `Bash` execution environment does not properly sanitize or escape these arguments (e.g., `taskId`, `--message` content), a malicious input containing shell metacharacters (e.g., `;`, `&&`, `|`, `$()`) could lead to arbitrary command execution on the host system. Implement strict input validation and sanitization for all arguments passed to `Bash` commands. Ideally, permissions should be more granular, specifying allowed argument patterns or disallowing shell metacharacters. If possible, use a more structured way to pass arguments than raw shell strings, or ensure the `Bash` tool escapes all user-controlled input. | Static | Manifest:1 |
Scan History
Embed Code
[](https://skillshield.io/report/f02190511af907bb)
Powered by SkillShield