Trust Assessment
turix-mac received a trust score of 73/100, placing it in the Caution category: the skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding (1 critical, 0 high, 0 medium, 0 low severity). The key finding: a user-provided task is injected directly into the LLM configuration.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | User-provided task directly injected into LLM configuration. The `run_turix.sh` script captures the user's task description from the command line (`$*`) and passes it as an environment variable (`TASK_ARG`) to an embedded Python script, which writes the value into the `agent.task` field of `config.json`. The skill's documentation states that the 'Brain' model "understands the task and generates step-by-step plans", indicating that this `task` field is consumed by an LLM. An attacker could craft a malicious task description containing prompt-injection techniques to manipulate the LLM's behavior, bypass safety mechanisms, or attempt to extract sensitive information. | LLM | `scripts/run_turix.sh:100` |

Recommendation: implement robust input sanitization or validation for the `task_arg` value before it is written to `config.json` and consumed by the LLM. Consider a dedicated LLM input-sanitization library or a strict allowlist of acceptable task patterns. If direct LLM interaction is intended, ensure the LLM itself has strong guardrails against prompt injection.
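The recommended mitigation could be sketched as follows. This is a minimal illustration, not the skill's actual code: the function names, the length limit, the character allowlist, and the blocklisted phrases are all assumptions chosen for the example, and a real deployment would need a more thorough injection-detection strategy.

```python
import json
import re

MAX_TASK_LEN = 500

# Allowlist: letters, digits, whitespace, and common punctuation only.
# Control characters and template/markup syntax are rejected outright.
ALLOWED = re.compile(r"^[\w\s.,:;!?()'\"/-]+$")

# Illustrative (non-exhaustive) phrases associated with prompt injection.
SUSPICIOUS = ("ignore previous", "disregard all", "system prompt")

def sanitize_task(task: str) -> str:
    """Validate a user-supplied task string before it reaches the LLM."""
    task = task.strip()
    if not task or len(task) > MAX_TASK_LEN:
        raise ValueError(f"task must be 1-{MAX_TASK_LEN} characters")
    if not ALLOWED.fullmatch(task):
        raise ValueError("task contains disallowed characters")
    lowered = task.lower()
    if any(marker in lowered for marker in SUSPICIOUS):
        raise ValueError("task matches a known prompt-injection pattern")
    return task

def write_config(task: str, path: str = "config.json") -> None:
    """Write the validated task into the config's agent.task field."""
    config = {"agent": {"task": sanitize_task(task)}}
    with open(path, "w") as fh:
        json.dump(config, fh, indent=2)
```

An allowlist rejects by default, so unexpected syntax fails closed; the phrase blocklist alone would be trivially bypassed, which is why the report also suggests guardrails on the LLM side.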
[View full report](https://skillshield.io/report/f3225c1f5fdf8c53)
Powered by SkillShield