Trust Assessment
hostinger received a trust score of 80/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 4 findings: 0 critical, 0 high, 3 medium, and 0 low severity. Key findings include "Unsafe deserialization / dynamic eval", "Suspicious import: requests", and "Potential arbitrary file read via DNS record update/validation".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remove obfuscated code-execution patterns: legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | `skills/rexlunae/hostinger/scripts/hostinger.py:5` |
| MEDIUM | **Suspicious import: `requests`.** Import of `requests` detected; this module provides network or low-level system access. Verify that the import is necessary, since network and system modules in skill code may indicate data exfiltration. | Static | `skills/rexlunae/hostinger/scripts/hostinger.py:16` |
| MEDIUM | **Potential arbitrary file read via DNS record update/validation.** The `dns_update` and `dns_validate` commands in `scripts/hostinger.py` read a JSON file specified by the `args.records_file` argument. If the AI agent can be prompted to provide an arbitrary file path (e.g., `/etc/passwd`, `/root/.ssh/id_rsa`) instead of a legitimate DNS records file, the skill will attempt to read that file. If the file content is valid JSON, it will be processed and potentially printed to standard output, leading to data exfiltration; even a failed read attempt is a concern. Implement stricter validation for `args.records_file` to ensure it points to expected file types or locations, or restrict the LLM's ability to supply arbitrary file paths for this argument. Consider wrapping `json.load` in a `try`/`except` block that handles `json.JSONDecodeError` gracefully and prevents unintended output of non-JSON file contents. | LLM | `scripts/hostinger.py:129` |
| INFO | **Skill handles a sensitive API token.** The skill reads an API token from `~/.config/hostinger/token` to authenticate with the Hostinger API. This is necessary for the skill's functionality, and the token is used correctly in requests to the hardcoded Hostinger API endpoint, but the presence of the credential makes the skill a target for data exfiltration if the skill were compromised or mishandled the token (e.g., by logging it). The current code shows no explicit mishandling, but the data is sensitive by nature. Ensure the environment where the skill runs is secure, and keep logging practices strict so the token is never inadvertently logged. Consider an environment variable or a secrets manager instead of a plain file in production, although a config file is common for a CLI tool. | LLM | `scripts/hostinger.py:20` |
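The `records_file` remediation above can be sketched as a small loader that rejects paths outside the working directory and handles `json.JSONDecodeError` without echoing file contents. This is a minimal illustration, not the skill's actual code; the function name `load_dns_records` and the working-directory restriction are assumptions about how the fix might look.

```python
import json
from pathlib import Path


def load_dns_records(records_file: str) -> dict:
    """Load DNS records from a JSON file, rejecting paths outside the
    current working directory and refusing non-JSON content.

    Hypothetical hardening sketch for the `args.records_file` argument.
    """
    path = Path(records_file).resolve()
    # Restrict reads to the working directory to block paths like /etc/passwd.
    if Path.cwd().resolve() not in path.parents:
        raise ValueError(f"records file must live under {Path.cwd()}: {path}")
    if path.suffix != ".json":
        raise ValueError(f"expected a .json file, got: {path.name}")
    try:
        with path.open() as f:
            return json.load(f)
    except json.JSONDecodeError as exc:
        # Fail without printing file contents, so non-JSON files are not leaked.
        raise ValueError(f"{path.name} is not valid JSON") from exc
```

Raising instead of printing keeps unexpected file contents out of standard output, which is where the exfiltration risk lies.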
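The token-handling note can likewise be sketched: prefer an environment variable over the plain config file, and refuse a token file that other users can read. The variable name `HOSTINGER_API_TOKEN` and the permission check are illustrative assumptions, not part of the audited skill.

```python
import os
import stat
from pathlib import Path

# Token location reported by the scan; the env-var fallback is hypothetical.
TOKEN_PATH = Path.home() / ".config" / "hostinger" / "token"


def read_api_token() -> str:
    """Return the Hostinger API token, preferring an environment variable.

    Falls back to the config file, but refuses it if group/other read
    bits are set, since a world-readable token file defeats the point.
    """
    token = os.environ.get("HOSTINGER_API_TOKEN")  # assumed variable name
    if token:
        return token.strip()
    mode = TOKEN_PATH.stat().st_mode
    if mode & (stat.S_IRGRP | stat.S_IROTH):
        raise PermissionError(
            f"{TOKEN_PATH} is readable by other users; run: chmod 600 {TOKEN_PATH}"
        )
    return TOKEN_PATH.read_text().strip()
```

Never log the return value of a helper like this; pass it straight into the request's `Authorization` header.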
[View full report](https://skillshield.io/report/1a5cf8fea0eeca9d)
Powered by SkillShield