Trust Assessment
bambu-cli received a trust score of 67/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include potential data exfiltration via file upload, broad local filesystem access, and arbitrary G-code execution on the printer.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential data exfiltration via file upload.** The skill provides a `files upload` command that allows uploading arbitrary local files to the BambuLab printer. A malicious prompt could instruct the LLM to upload sensitive files from the local system (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, configuration files) to the printer, effectively exfiltrating them. *Remediation:* implement strict validation and allowlisting for file paths that can be uploaded. Require explicit user confirmation for uploads, especially for paths outside a designated safe directory. The LLM should be trained to refuse to upload sensitive system files. | LLM | SKILL.md:50 |
| HIGH | **Arbitrary G-code execution on printer.** The `gcode send <line...>` command allows sending arbitrary G-code commands directly to the BambuLab printer. G-code can control all aspects of the printer's operation, including motion, heating, and tool functions. A malicious prompt could instruct the LLM to send harmful G-code, potentially causing physical damage to the printer, creating fire hazards, or misusing the device. Although 'confirmation required' is mentioned, `--no-check` can bypass validation, increasing the risk. *Remediation:* ensure that the LLM strictly enforces the 'confirmation required' policy for `gcode send`. Implement a robust G-code validation and sanitization layer to block known dangerous commands, even when `--no-check` is used. Consider an allowlist of safe G-code commands if possible. | LLM | SKILL.md:62 |
| MEDIUM | **Broad local filesystem access.** The `bambu-cli` skill, through commands like `files upload`, `files download`, and `camera snapshot --out`, can read from and write to arbitrary paths on the local filesystem. While necessary for the tool's intended functionality, this broad access could be abused by a malicious prompt to read sensitive files or overwrite critical system files. *Remediation:* implement sandboxing or restrict the skill's filesystem access to specific, designated directories. The LLM should be trained to scrutinize file paths provided by users and refuse to interact with sensitive system paths. | LLM | SKILL.md:50 |
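The path-allowlisting remediation suggested for the upload and filesystem findings can be sketched as a small pre-flight check. This is a minimal illustration, not bambu-cli's actual code: the `~/bambu-uploads` directory and the function name are assumptions.

```python
from pathlib import Path

# Hypothetical designated safe directory for uploads (an assumption,
# not a bambu-cli default).
ALLOWED_ROOT = Path.home() / "bambu-uploads"

def is_upload_allowed(user_path: str) -> bool:
    """Return True only for existing files inside ALLOWED_ROOT.

    Path.resolve() collapses ".." segments and symlinks, so traversal
    tricks like "bambu-uploads/../../etc/passwd" are rejected.
    """
    resolved = Path(user_path).expanduser().resolve()
    try:
        resolved.relative_to(ALLOWED_ROOT.resolve())
    except ValueError:
        return False  # outside the designated safe directory
    return resolved.is_file()
```

With a check like this, `files upload /etc/passwd` or `files upload ~/.ssh/id_rsa` would be refused before anything reaches the printer; per the finding, uploads outside the safe directory should additionally require explicit user confirmation rather than silent denial.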
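Likewise, the G-code allowlist suggested in the second finding could look roughly like the filter below. The specific allowed commands and the temperature ceiling are illustrative assumptions, not bambu-cli policy; crucially, a filter like this would run unconditionally, so `--no-check` cannot bypass it.

```python
# Hypothetical allowlist of known-safe G-code commands (assumed set).
SAFE_GCODES = {"G0", "G1", "G28", "M104", "M105", "M140"}
MAX_TEMP_C = 250  # assumed safety ceiling for heater commands

def gcode_line_allowed(line: str) -> bool:
    """Allow only listed commands; cap heater set-point commands."""
    tokens = line.strip().upper().split()
    if not tokens or tokens[0] not in SAFE_GCODES:
        return False
    if tokens[0] in {"M104", "M140"}:  # hotend / bed temperature
        for tok in tokens[1:]:
            if tok.startswith("S"):
                try:
                    if float(tok[1:]) > MAX_TEMP_C:
                        return False
                except ValueError:
                    return False  # malformed parameter: reject
    return True
```

An allowlist (reject by default) is the safer design here: a denylist of "dangerous" G-codes would miss firmware-specific commands, whereas unlisted commands simply never reach the printer.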
Embed Code
[SkillShield trust report badge](https://skillshield.io/report/9d6b797748d74b3c)
Powered by SkillShield