Trust Assessment
task-orchestrator received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. Key findings include Autonomous Cron Job with Unrestricted LLM Prompt, Untrusted Input Embedded in Shell Commands for Self-Healing, Broad Permissions and Potential Data Exposure in Autonomous Orchestration.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Autonomous Cron Job with Unrestricted LLM Prompt The skill explicitly instructs the LLM to add a cron job with a `prompt` that directs an autonomous agent to 'fix issues yourself' and 'Do NOT ping human'. This grants the cron job's LLM context significant autonomy without human oversight, making it highly susceptible to prompt injection if any part of its input (e.g., manifest, task descriptions, error logs) is untrusted. The cron job is expected to perform complex tasks including self-healing, which involves executing shell commands and interacting with external tools. Remove or severely restrict the `prompt` field for autonomous cron jobs. Implement strict input validation and human-in-the-loop approval for critical actions. Ensure the cron job's LLM context is minimal and does not grant broad decision-making authority. | LLM | SKILL.md:179 | |
| HIGH | Untrusted Input Embedded in Shell Commands for Self-Healing The self-healing mechanism, which the autonomous cron job is instructed to perform, explicitly embeds untrusted content (`$(cat error.log | tail -20)`) directly into a `codex` command string. If `error.log` (which can contain arbitrary output from a `tmux` session running untrusted code) contains malicious shell commands or prompt injection attempts, these could be executed by the orchestrator's shell or injected into the `codex` agent's prompt, leading to command injection or further prompt manipulation. Sanitize all untrusted inputs before embedding them into shell commands or LLM prompts. Avoid direct command substitution (`$()`) with untrusted data. Consider using safer methods like passing data via environment variables or temporary files, and ensure the LLM's execution environment is sandboxed. | LLM | SKILL.md:150 | |
| HIGH | Broad Permissions and Potential Data Exposure in Autonomous Orchestration The described orchestration system, enabled by the autonomous cron job, operates with excessive permissions. It can perform `git push`, `gh pr create`, and execute `codex --yolo` (autonomous code generation/execution) within arbitrary worktrees. The self-healing logic also exposes potentially sensitive `error.log` content to the `codex` LLM, which could lead to data exfiltration to the model provider or misuse by a compromised agent. The `gh` CLI usage implies access to GitHub credentials, which are then used autonomously. Implement fine-grained access control for the orchestrator. Restrict `git push` and `gh pr create` to specific repositories or require human approval. Sanitize all data passed to LLMs, especially error logs, to prevent sensitive information exposure. Ensure `codex` operates within a tightly sandboxed environment with minimal network and filesystem access. | LLM | SKILL.md:190 |
Scan History
Embed Code
[](https://skillshield.io/report/39c4d4800172c211)
Powered by SkillShield