Security Audit
openant-ai/openant-skills:skills/search-tasks
github.com/openant-ai/openant-skills

Trust Assessment
openant-ai/openant-skills:skills/search-tasks received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is "Broad Bash permission allows command injection via user-controlled arguments."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 5, 2026 (commit 0ad72002). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Broad Bash permission allows command injection via user-controlled arguments | Static | Manifest |

Description: The `allowed-tools` manifest grants overly broad `Bash` permissions by using a wildcard (`*`) for the `npx @openant-ai/cli@latest tasks list`, `get`, `escrow`, and `status` commands. This allows arbitrary arguments to be appended to these commands. If user-provided input for task IDs, filter options (e.g., `--tags`, `--creator`), or other parameters is interpolated directly into the shell command string by the LLM without sanitization or escaping, a malicious actor could inject arbitrary shell commands. The skill explicitly states that commands "execute immediately without user confirmation," which increases the severity of this potential command injection vulnerability.

Remediation: Restrict `allowed-tools` to specific, well-defined arguments instead of using a wildcard (`*`). For example, instead of `Bash(npx @openant-ai/cli@latest tasks list *)`, define specific argument patterns like `Bash(npx @openant-ai/cli@latest tasks list --status <status> --tags <tags>)`. Alternatively, use a more controlled execution environment (e.g., a Python function that validates and constructs the command safely) instead of direct `Bash` execution with wildcards. If `Bash` must be used with user-controlled input, ensure the LLM is strictly instructed to sanitize and escape all user-provided arguments to prevent shell metacharacter interpretation.
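The "validate and construct the command safely" remediation can be sketched as follows. This is a minimal illustration, not part of the audited skill: the helper name, the allowed status values, and the tag pattern are assumptions. The key idea is to allowlist inputs and build an argv list that is executed without a shell, so metacharacters are never interpreted.

```python
import re

# Hypothetical allowlist of status values (assumed, not from the skill).
ALLOWED_STATUSES = {"open", "in_progress", "completed"}

# Tags restricted to safe characters; anything else is rejected.
SAFE_TOKEN_RE = re.compile(r"^[A-Za-z0-9_-]+$")


def build_tasks_list_cmd(status: str, tags: list[str]) -> list[str]:
    """Construct a safe `tasks list` invocation from user input.

    Returns an argv list suitable for subprocess.run(..., shell=False),
    which bypasses the shell entirely and so cannot be injected via
    metacharacters like `;`, `|`, or backticks.
    """
    if status not in ALLOWED_STATUSES:
        raise ValueError(f"unsupported status: {status!r}")
    for tag in tags:
        if not SAFE_TOKEN_RE.match(tag):
            raise ValueError(f"invalid tag: {tag!r}")
    cmd = ["npx", "@openant-ai/cli@latest", "tasks", "list",
           "--status", status]
    if tags:
        cmd += ["--tags", ",".join(tags)]
    return cmd


# Usage (not executed here): pass the argv list directly, never a
# concatenated shell string:
#   subprocess.run(build_tasks_list_cmd("open", ["web"]), check=True)
```

Because the command is assembled as a list and run with `shell=False`, a malicious value such as `open; rm -rf /` fails validation instead of reaching a shell.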
Scan History
[View full report](https://skillshield.io/report/28ddc9835ee177bb)
Powered by SkillShield