Trust Assessment
wandb received a trust score of 90/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The single finding is Potential Command Injection via unsanitized CLI arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via unsanitized CLI arguments.** The skill's `SKILL.md` describes command-line invocations of Python scripts in which user-provided arguments (e.g., `ENTITY/PROJECT/RUN_ID`, `ENTITY`, `--projects p1,p2`) are passed directly to `python3`. If the host LLM does not sanitize or quote these arguments before executing the shell command, a malicious user could inject shell metacharacters (e.g., `;`, `&&`, `\|`, `$(...)`) to execute arbitrary commands. The Python scripts themselves use `argparse` and `wandb` API calls, which are safe; the risk lies in the initial shell invocation described in the skill's documentation. The host LLM's execution environment must ensure that all user-provided arguments passed to shell commands are properly quoted or escaped, for example by wrapping them in single quotes: `'ENTITY/PROJECT/RUN_ID'`. This applies to every command in `SKILL.md` that takes user input as an argument, including those on lines 29 and 43. | LLM | SKILL.md:18 |
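The recommended mitigation can be sketched in Python. This is a minimal illustration, not SkillShield's or the skill's actual code: the script path `scripts/get_run.py` is a hypothetical stand-in for whatever `SKILL.md` invokes. It shows the two standard defenses: quoting with `shlex.quote` when a shell command string must be built, and avoiding the shell entirely by passing an argument list.

```python
import shlex

# Hypothetical user-supplied run path containing shell metacharacters.
run_path = "entity/project/run; rm -rf ~"

# Defense 1: if a shell command string must be emitted, quote the
# argument so metacharacters (;, &&, |, $(...)) lose their meaning.
quoted = shlex.quote(run_path)
command_line = f"python3 scripts/get_run.py {quoted}"
# command_line is now: python3 scripts/get_run.py 'entity/project/run; rm -rf ~'

# Defense 2 (safer): skip the shell entirely by building an argument
# list for subprocess.run(argv, shell=False); no quoting is needed
# because no shell ever parses the string.
argv = ["python3", "scripts/get_run.py", run_path]
```

Passing an argument list with `shell=False` is the stronger choice when the execution environment allows it; `shlex.quote` is the fallback when the LLM must emit a single command line.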
Full report: https://skillshield.io/report/23b5e3adeac1a42f