Trust Assessment
agent-tools received a trust score of 23/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 2 critical, 1 high, 1 medium, and 0 low severity. Key findings include arbitrary command execution, remote code execution via curl/wget piped to a shell, and potential data exfiltration through the `infsh --input` argument.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. Findings were raised in the Manifest, Static, and LLM layers.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Remote code download is piped to an interpreter. Review all shell execution calls: ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | skills/okaris/agent-tools/SKILL.md:11 |
| CRITICAL | **Remote code execution: curl/wget piped to shell.** Detected a pattern that downloads and immediately executes remote code, a primary malware delivery vector. Never pipe curl/wget output directly to a shell interpreter. | Static | skills/okaris/agent-tools/SKILL.md:11 |
| HIGH | **Potential data exfiltration via `infsh --input`.** The skill declares `Bash(infsh *)` as an allowed tool, and `SKILL.md` shows examples such as `infsh app run <app> --input input.json`. If the LLM builds an `infsh` command whose `--input` path is derived from untrusted user input, an attacker could point it at a sensitive file (e.g. `/etc/passwd`, `~/.ssh/id_rsa`); the AI app could then process that file and exfiltrate its contents via the app's output. Strictly validate and sanitize any user-provided file paths passed to `--input`, restrict file access to a designated sandbox directory, or disallow arbitrary paths when user input is involved. If only hardcoded or pre-approved files are intended, constrain the LLM to those. | LLM | SKILL.md:38 |
| MEDIUM | **Potential command injection via `infsh --input` if the LLM misquotes.** The skill allows `infsh` execution via `Bash(infsh *)`, and `SKILL.md` shows `infsh app run falai/flux-dev-lora --input '{"prompt": "a cat astronaut"}'`. If the LLM constructs this command from user-controlled input and fails to escape shell metacharacters (e.g. uses double quotes instead of single quotes, or leaves backticks or semicolons unescaped inside the JSON string), arbitrary commands can execute. The example uses single quotes, which are generally safer, but an LLM may deviate from that pattern under complex or adversarial input. Ensure the LLM strictly single-quotes JSON inputs and escapes embedded single quotes, or avoid shell embedding entirely: write the JSON to a temporary file and pass the file path, or use a dedicated API if one is available. | LLM | SKILL.md:26 |
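The remediation for the curl/wget finding, "never pipe downloaded code into a shell", is usually implemented as download, verify against a pinned hash, then execute. A minimal sketch of that check in Python (the file path and the idea of a published, pinned hash are illustrative; nothing here ships with the skill):

```python
import hashlib
from pathlib import Path

def verify_download(path: str, pinned_sha256: str) -> bool:
    """Return True only if the downloaded file matches the pinned hash.

    In the unsafe `curl | sh` pattern this step does not exist: whatever
    the server returns is executed immediately. Pinning a hash means a
    tampered download is rejected before it ever reaches an interpreter.
    """
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == pinned_sha256
```

Only after `verify_download(...)` returns True would the installer be handed to an interpreter; on a mismatch the file is discarded.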
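The sandbox restriction suggested for the HIGH finding can be sketched as a path guard that resolves a user-supplied `--input` path and rejects anything outside a designated directory. The directory name and function below are hypothetical, not part of the skill:

```python
from pathlib import Path

# Hypothetical sandbox root for user-supplied --input files (illustrative).
SANDBOX = Path("/tmp/infsh-inputs").resolve()

def safe_input_path(user_path: str) -> Path:
    """Resolve user_path and reject anything outside the sandbox directory."""
    candidate = (SANDBOX / user_path).resolve()
    # resolve() collapses ".." and follows symlinks, so "../../etc/passwd",
    # an absolute "/etc/passwd", and a symlink escaping the sandbox are all
    # caught by the containment check below.
    if not candidate.is_relative_to(SANDBOX):
        raise ValueError(f"input path escapes sandbox: {user_path}")
    return candidate
```

With this guard, `safe_input_path("job.json")` resolves inside the sandbox, while `safe_input_path("../../etc/passwd")` raises before any `infsh` command is built.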
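The temporary-file mitigation for the MEDIUM finding sidesteps shell quoting entirely: serialize the JSON to a file and pass only the path. A sketch, assuming the `infsh app run <app> --input <file>` shape shown in `SKILL.md` (the helper itself is hypothetical):

```python
import json
import tempfile

def build_infsh_command(app: str, payload: dict) -> list[str]:
    """Write payload as JSON to a temp file; return an argv list for subprocess.run.

    Because the payload goes through json.dump and the command is an argv
    list (no shell), metacharacters like ";" or backticks in a prompt are
    inert data rather than shell syntax.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        json.dump(payload, f)
        path = f.name
    return ["infsh", "app", "run", app, "--input", path]
```

The result would be executed with `subprocess.run(build_infsh_command(...), capture_output=True)`; passing an argv list instead of a shell string is what removes the quoting problem.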
[Full report](https://skillshield.io/report/2344c511fb53478d)