Trust Assessment
game-light-tracker received a trust score of 81/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include Potential Command Injection via PowerShell Script Arguments and Access to Home Assistant API Token from Local File.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via PowerShell Script Arguments.** The skill's workflow explicitly instructs the host LLM to execute PowerShell scripts (e.g., `game-tracker.ps1`) with arguments derived from user input (team names, light entity IDs, colors). If these arguments are not rigorously sanitized before being passed to the PowerShell commands, a malicious user could inject arbitrary commands, leading to remote code execution; the skill directly instructs the LLM to construct and execute a command containing user-controlled parts. *Recommendation:* validate and sanitize all user-provided arguments before constructing commands, quote and escape them properly, and prefer an execution mechanism that separates the command from its arguments (or a dedicated script-execution API with strict parameter validation). | LLM | SKILL.md:99 |
| MEDIUM | **Access to Home Assistant API Token from Local File.** The skill's workflow instructs the host LLM to read a Home Assistant API token from `.homeassistant-config.json` using `Get-Content`. While necessary for the skill's functionality, this exposes a sensitive credential to the skill's execution environment; combined with a command-injection flaw or another vulnerability, the token could be exfiltrated or misused. *Recommendation:* store credentials in a secure, isolated secrets management system (environment variables or a dedicated vault) rather than in local files readable by the skill, run the skill with least privilege, and restrict access to the configuration file itself. | LLM | SKILL.md:92 |
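The "separate commands from arguments" mitigation in the HIGH finding can be sketched as follows. This is an illustrative Python sketch, not code from the skill: the function name, parameter names, and allowlist patterns are assumptions made for the example.

```python
import re

# Allowlist patterns for user-supplied values (illustrative assumptions).
_TEAM_RE = re.compile(r"^[A-Za-z0-9 .'-]{1,40}$")
_ENTITY_RE = re.compile(r"^light\.[a-z0-9_]+$")

def build_tracker_argv(team: str, entity_id: str) -> list[str]:
    """Validate inputs, then return an argv list.

    Because the command and each argument are separate list elements,
    nothing is re-parsed by a shell: an input like
    "Celtics; Remove-Item -Recurse ~" is rejected by validation, and even
    without validation it would reach the script as one inert string
    rather than an injected command.
    """
    if not _TEAM_RE.fullmatch(team):
        raise ValueError(f"rejected team name: {team!r}")
    if not _ENTITY_RE.fullmatch(entity_id):
        raise ValueError(f"rejected entity id: {entity_id!r}")
    return ["pwsh", "-File", "game-tracker.ps1",
            "-Team", team, "-EntityId", entity_id]

# subprocess.run(build_tracker_argv(team, entity), check=True) would then
# execute the script without ever concatenating user input into a
# command string (i.e., never shell=True with interpolated input).
```

The key design choice is that validation happens before command construction, and the command is never assembled by string interpolation.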
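The MEDIUM finding's recommendation (prefer a secrets source over a plaintext config file) could look like the sketch below. The `HASS_TOKEN` variable name, the fallback logic, and the permission check are assumptions for illustration, not part of the skill.

```python
import json
import os
from pathlib import Path

def load_hass_token(config_path: str = ".homeassistant-config.json") -> str:
    """Prefer an environment variable; fall back to the config file only
    if its permissions exclude group/other access (least privilege)."""
    token = os.environ.get("HASS_TOKEN")
    if token:
        return token
    # Fallback to the file the skill currently reads, but refuse to use it
    # if other local users could read it.
    p = Path(config_path)
    if p.stat().st_mode & 0o077:
        raise PermissionError(f"{config_path} is readable by other users")
    return json.loads(p.read_text())["token"]
```

Keeping the token out of skill-readable files entirely (environment variable or a dedicated vault) also limits the blast radius if the command-injection finding above were ever exploited.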
[View the full report](https://skillshield.io/report/a593f76a732a5175)
Powered by SkillShield