Trust Assessment
sui-knowledge received a trust score of 62/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 1 medium, and 1 low severity. Key findings include potential command injection via `rg` with user input (critical), execution of an untrusted `setup.sh` script (high), and a missing Node lockfile (low).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential command injection via `rg` with user input.** The skill's `SKILL.md` instructs the host LLM to execute `rg` (ripgrep) commands, interpolating the user-provided keyword or question keywords directly into the shell command. If the LLM runs the command as written, a malicious user could inject arbitrary shell commands by crafting the input (e.g., `'; rm -rf /'` or `'; cat /etc/passwd'`), leading to arbitrary code execution or data exfiltration. The LLM should never execute shell commands built from untrusted user input; it should use a safe search API or rigorously validate and sanitize the input before passing it to any shell command, ideally wrapping the `rg` call in an execution environment that prevents command chaining (see the quoting sketch after this table). | LLM | SKILL.md:40 |
| HIGH | **Execution of untrusted `setup.sh` script.** `SKILL.md` explicitly instructs the host LLM to run `chmod +x setup.sh && ./setup.sh`, i.e., to execute an arbitrary shell script shipped in the untrusted skill package. Although `setup.sh` does not appear to take direct user input, running an arbitrary script from an untrusted source is a significant risk: the script can perform any action the LLM's execution environment allows, including downloading malicious content, modifying files, or exfiltrating data. If setup actions are required, they should be performed through a sandboxed, declarative, or strictly controlled mechanism provided by the LLM platform rather than direct shell execution. | LLM | SKILL.md:35 |
| MEDIUM | **Unpinned Git repository clone/pull.** `setup.sh` clones and updates the `MystenLabs/sui` GitHub repository without specifying a fixed commit hash or tag, using `git clone --depth 1` and `git pull` on the `main` branch. This is a supply-chain risk: if the upstream `main` branch were compromised, malicious content could be pulled into the skill's environment during setup or later updates, potentially leading to code execution or data manipulation. Pin the clone and pull operations to a specific, immutable commit hash or tag so the exact documentation version is retrieved, and review and update the pinned version regularly (see the pinned-clone sketch after this table). | LLM | setup.sh:14 |
| LOW | **Node lockfile missing.** `package.json` is present but no lockfile (`package-lock.json`, `pnpm-lock.yaml`, or `yarn.lock`) was found. Commit a lockfile for deterministic dependency resolution (see the note after this table). | Dependencies | skills/easonc13/sui/package.json |
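For the command-injection finding, a minimal sketch of a safer `rg` invocation, assuming the user's search term arrives as a positional argument rather than being interpolated into a shell string. The variable name `KEYWORD` and the `sui/docs/` search path are illustrative, not taken from the skill.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Illustrative: the user-supplied search term, received as an argument
# instead of being spliced into a shell command string.
KEYWORD="$1"

# --fixed-strings treats the term as a literal rather than a regex,
# "--" stops option parsing so the term cannot smuggle in extra rg flags,
# and quoting "$KEYWORD" prevents word splitting and command chaining.
rg --fixed-strings --line-number -- "$KEYWORD" sui/docs/
```

Even with this quoting, a host that lets the model compose raw shell strings remains exposed; routing the search through a constrained tool API is the stronger fix.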
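For the unpinned-clone finding, a sketch of how `setup.sh` could pin to an immutable revision. `SUI_DOCS_COMMIT` is a placeholder for a commit hash the skill author has reviewed, and the snippet assumes the remote allows fetching by commit hash (GitHub does for reachable commits).

```bash
#!/usr/bin/env bash
set -euo pipefail

# Placeholder: a reviewed, immutable commit hash from MystenLabs/sui.
SUI_DOCS_COMMIT="<pinned-commit-sha>"

# Shallow-clone once, then always check out the exact pinned revision
# instead of tracking the mutable main branch.
if [ ! -d sui ]; then
  git clone --depth 1 https://github.com/MystenLabs/sui.git sui
fi

git -C sui fetch --depth 1 origin "$SUI_DOCS_COMMIT"
git -C sui checkout --detach "$SUI_DOCS_COMMIT"
```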
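For the missing-lockfile finding, the lockfile can be generated without installing dependencies and committed alongside `package.json` (assuming npm is the package manager in use):

```bash
# Generate package-lock.json without installing node_modules, then commit it.
npm install --package-lock-only
git add package-lock.json
```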