Trust Assessment
bambu-local received a trust score of 81/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include "Insecure TLS Configuration for MQTT Connection" and "Arbitrary G-code Execution Capability".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Insecure TLS Configuration for MQTT Connection.** The skill disables TLS certificate validation for the MQTT connection by setting `cert_reqs=ssl.CERT_NONE` and explicitly allowing insecure TLS with `client.tls_insecure_set(True)`. This leaves the connection vulnerable to man-in-the-middle (MitM) attacks: an attacker on the local network could intercept, read, or modify commands sent to the printer, and potentially capture sensitive information such as the `ACCESS_CODE` if it is transmitted over the unverified channel. *Remediation:* configure proper TLS certificate validation. If the printer uses a self-signed certificate, provide a mechanism to trust it (e.g., by specifying a CA certificate file). Avoid `ssl.CERT_NONE` and `client.tls_insecure_set(True)` in production environments. | LLM | bambu.py:29 |
| MEDIUM | **Arbitrary G-code Execution Capability.** The `send_gcode` function passes arbitrary G-code strings directly to the 3D printer via MQTT without any sanitization or validation. While this is an intended feature for advanced control, it poses a significant risk if the AI agent or a malicious user supplies G-code that could physically damage the printer, create unsafe conditions (e.g., extreme temperatures or rapid uncontrolled movements), or perform unexpected operations; an LLM could be prompted to generate such harmful G-code. *Remediation:* implement a whitelist or validation mechanism for G-code commands, especially critical or potentially destructive ones; warn clearly about the dangers of arbitrary G-code execution; and, for AI agent integration, consider a human-in-the-loop confirmation step or restricting the LLM's ability to generate arbitrary G-code. | LLM | bambu.py:108 |
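The TLS remediation above can be sketched with Python's standard `ssl` module. This is a minimal illustration, not the skill's code: the `printer_ca.pem` path is a hypothetical location for the printer's exported certificate, and the trailing comment assumes the skill's paho-mqtt client, whose `tls_set_context()` accepts a prebuilt context.

```python
import ssl

# Insecure pattern flagged at bambu.py:29 -- all verification disabled,
# so any device on the LAN can impersonate the printer:
insecure = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
insecure.check_hostname = False          # must be cleared before CERT_NONE
insecure.verify_mode = ssl.CERT_NONE

# Safer alternative: require verification and pin the printer's
# self-signed certificate as the sole trust anchor.
secure = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
secure.verify_mode = ssl.CERT_REQUIRED   # the default for PROTOCOL_TLS_CLIENT
# secure.load_verify_locations("printer_ca.pem")  # hypothetical exported cert

# With paho-mqtt, this context would replace the insecure calls:
#   client.tls_set_context(secure)
```

Pinning the self-signed certificate keeps the zero-trust benefit of verification without requiring a public CA, which local printers cannot obtain.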
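The whitelist idea from the second finding could look like the sketch below. The command set, the temperature ceiling, and the `validate_gcode` helper are illustrative assumptions, not values or functions from the skill itself.

```python
# Hypothetical guard for send_gcode: accept only vetted command words.
ALLOWED_GCODE = {"G0", "G1", "G28", "M104", "M105", "M140"}  # example allowlist
MAX_TEMP_C = 250  # example safety ceiling for temperature-setting commands

def validate_gcode(line: str) -> bool:
    """Return True only if the line starts with an allowlisted command
    and any temperature argument stays under the safety ceiling."""
    parts = line.strip().split()
    if not parts or parts[0].upper() not in ALLOWED_GCODE:
        return False
    # Extra check for hotend/bed temperature commands (M104/M140):
    if parts[0].upper() in {"M104", "M140"}:
        for p in parts[1:]:
            if p.upper().startswith("S"):
                try:
                    if float(p[1:]) > MAX_TEMP_C:
                        return False
                except ValueError:
                    return False  # malformed argument: reject
    return True

# validate_gcode("G28")       -> True  (homing is allowlisted)
# validate_gcode("M104 S500") -> False (exceeds the temperature ceiling)
# validate_gcode("M999")      -> False (not on the allowlist)
```

A guard like this would run before publishing to MQTT, with rejected lines surfaced to the user instead of being sent to the printer.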
[View the full report on SkillShield](https://skillshield.io/report/46272c6e26946c37)
Powered by SkillShield