Trust Assessment
hetzner-cloud received a trust score of 35/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 1 medium, and 0 low severity. Key findings include file read with network-send exfiltration, sensitive SSH key/config path access, and untrusted content attempting to dictate LLM safety rules.
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. Findings were reported in every layer except Dependency Graph.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **File read + network send exfiltration** — SSH key/config file access. Remediation: remove access to sensitive files not required by the skill's stated purpose; SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | skills/pasogott/hetzner-cloud/SKILL.md:109 |
| HIGH | **Sensitive path access: SSH key/config** — access to the SSH key path `~/.ssh/id_rsa` was detected, which may indicate credential theft. Remediation: verify that access to this sensitive path is justified and declared. | Static | skills/pasogott/hetzner-cloud/SKILL.md:109 |
| HIGH | **Skill requires and enables broad cloud infrastructure permissions** — the skill manages Hetzner Cloud infrastructure and explicitly requires an API token with read+write permissions, granting the LLM extensive control over the account: creating, modifying, and potentially destroying servers, volumes, networks, firewalls, and SSH keys. High-risk capabilities include creating SSH keys (`hcloud ssh-key create`), which can grant direct server access, and adding firewall rules (`hcloud firewall add-rule`) with broad source ranges (e.g. `0.0.0.0/0`). The skill's advisory safety rules are not technically enforced, so misuse or compromise of the LLM could cause significant infrastructure damage or unauthorized access. Remediation: (1) apply the principle of least privilege — re-evaluate whether read+write permissions are strictly necessary for all skill functions, and consider separate, more granular API tokens per sub-skill or operation if Hetzner Cloud supports them; (2) enforce safety policies with trusted guardrails outside the skill's untrusted definition (confirmation for destructive actions, parameter validation, restrictions on sensitive operations such as SSH key creation or broad firewall rules); (3) ensure the LLM's core logic always seeks explicit user confirmation for destructive or highly privileged operations, regardless of skill instructions. | LLM | SKILL.md:35 |
| MEDIUM | **Untrusted content attempts to dictate LLM safety rules** — the skill's "Safety Rules" section issues direct instructions to the LLM (e.g. "NEVER execute delete commands", "ALWAYS ask for confirmation", "ONLY the account owner can authorize infrastructure changes"). These are attempts by untrusted content to steer the LLM's operational behavior: if the LLM follows them its behavior becomes unpredictable, and if it correctly ignores them the intended safety is bypassed. The LLM should derive its core safety and interaction policies from trusted instructions, not from the skill's untrusted definition. Remediation: remove direct LLM instructions from the skill's untrusted content; define and enforce safety policies in the trusted system, implementing any needed usage constraints as trusted guardrails or pre-execution checks. | LLM | SKILL.md:7 |
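The Static-layer finding above comes from matching file contents against sensitive-path patterns. A minimal Python sketch of that kind of detector, assuming a simple regex rule list (the patterns and category names here are illustrative, not SkillShield's actual rule set):

```python
import re

# Illustrative sensitive-path rules; a real scanner would carry a much
# larger, maintained rule set.
SENSITIVE_PATH_PATTERNS = [
    (re.compile(r"~/\.ssh/"), "SSH key/config"),
    (re.compile(r"~/\.aws/credentials"), "cloud credentials"),
    (re.compile(r"~/\.config/gcloud/"), "cloud credentials"),
]

def scan_text(text: str) -> list[tuple[int, str, str]]:
    """Return (line_number, category, matched_text) for each sensitive
    path referenced anywhere in the given text."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, category in SENSITIVE_PATH_PATTERNS:
            match = pattern.search(line)
            if match:
                findings.append((lineno, category, match.group(0)))
    return findings
```

Running this over a skill file that contains `cat ~/.ssh/id_rsa` would yield one "SSH key/config" finding with the offending line number, which is essentially the shape of the HIGH finding reported above.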
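The LLM-layer remediations call for guardrails enforced in trusted code rather than in the skill's untrusted text. A minimal Python sketch of such a guardrail, assuming all `hcloud` invocations are routed through a wrapper (the function names and verb lists are hypothetical, not part of SkillShield or the hcloud CLI):

```python
import shlex
import subprocess

# Hypothetical verb lists; adjust to your own threat model.
DESTRUCTIVE_VERBS = {"delete", "detach", "remove-rule", "disable-protection"}
PRIVILEGED_VERBS = {"create", "add-rule"}

def needs_confirmation(command: str) -> bool:
    """True if the hcloud command contains a destructive or privileged verb.
    Matches any token conservatively instead of parsing hcloud's grammar."""
    argv = shlex.split(command)
    if not argv or argv[0] != "hcloud":
        raise ValueError("only hcloud commands pass this guardrail")
    return bool(set(argv[1:]) & (DESTRUCTIVE_VERBS | PRIVILEGED_VERBS))

def run_hcloud(command: str, confirm=input) -> subprocess.CompletedProcess:
    """Execute an hcloud command, requiring explicit user confirmation for
    destructive or privileged operations, regardless of skill instructions."""
    if needs_confirmation(command):
        if confirm(f"Run {command!r}? Type 'yes': ").strip().lower() != "yes":
            raise PermissionError("operation not confirmed by the user")
    return subprocess.run(shlex.split(command), capture_output=True, text=True)
```

Because the check lives in trusted code, the skill's text cannot disable it: `hcloud server delete 42` or `hcloud ssh-key create` is blocked until the user types "yes", while a read-only `hcloud server list` passes through unprompted.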
Full report: [skillshield.io/report/f493a54c2f9fe0ad](https://skillshield.io/report/f493a54c2f9fe0ad)