Trust Assessment
idealista received a trust score of 68/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 2 high and 1 medium severity (no critical or low findings). The findings are: Unpinned Git Repository in Skill Installation, Exposure of Sensitive File Paths for Credentials/Tokens, and Potential Command Injection via CLI Arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned Git Repository in Skill Installation.** The skill's manifest installs `idealista-cli` via `git clone` from `https://github.com/quifago/idealista-cli` without pinning to a specific commit hash or tag, so any change to the upstream default branch is pulled automatically during installation. A compromise of the `quifago/idealista-cli` repository, or malicious code introduced by its maintainer, could therefore execute arbitrary code on the host system without explicit review. Remediation: pin the `git clone` operation to a specific commit hash or tag (e.g., `url: 'https://github.com/quifago/idealista-cli#<commit_hash>'` or `url: 'https://github.com/quifago/idealista-cli#v1.2.3'`) to ensure reproducibility and prevent unexpected or malicious code changes from being installed automatically. | LLM | SKILL.md:1 |
| HIGH | **Exposure of Sensitive File Paths for Credentials/Tokens.** The skill explicitly names the file paths where `idealista-cli` stores sensitive information: `~/.config/idealista-cli/config.json` for API keys/secrets and `~/.cache/idealista-cli/token.json` for cached access tokens. While the skill itself does not read these files, it makes the LLM agent aware of their location; a malicious prompt could instruct the agent to read and exfiltrate their contents, leading to credential harvesting or unauthorized access. Remediation: avoid storing credentials directly on disk where possible. Otherwise, ensure the LLM agent is strictly sandboxed and prevented from accessing arbitrary file paths, especially those containing sensitive data, and implement robust input validation and output filtering to prevent exfiltration attempts. | LLM | SKILL.md:30 |
| MEDIUM | **Potential Command Injection via CLI Arguments.** The skill demonstrates running `python3 -m idealista_cli` with arguments (e.g., `--center`, `--distance`, `--operation`, `--property-type`) that are likely derived from user input. If `idealista-cli` does not sanitize these arguments before passing them to an underlying shell or other system calls, malicious user input (e.g., from natural language queries) could lead to command injection. For example, an input like `--center '39.594,-0.458' --distance 5000; rm -rf /` could be dangerous if not handled properly. Remediation: ensure `idealista-cli` rigorously sanitizes all user-provided arguments so that shell metacharacters and other malicious input cannot be executed, and instruct the LLM agent to validate and sanitize user-provided data before constructing and executing shell commands. | LLM | SKILL.md:42 |
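The pinned-install remediation for the first finding amounts to a clone followed by a checkout of a known revision. A minimal sketch in Python follows; the commit hash is a placeholder, not a real pinned revision of `idealista-cli`, and the function names are illustrative:

```python
import subprocess

REPO_URL = "https://github.com/quifago/idealista-cli"
# Placeholder only -- substitute a reviewed, known-good commit hash.
PINNED_COMMIT = "0123456789abcdef0123456789abcdef01234567"


def pinned_install_cmds(repo_url: str, commit: str, dest: str) -> list:
    """Build the git command sequence for a reproducible, pinned install."""
    return [
        ["git", "clone", repo_url, dest],         # fetch the repository
        ["git", "-C", dest, "checkout", commit],  # then pin to the reviewed commit
    ]


def install(dest: str = "idealista-cli") -> None:
    for cmd in pinned_install_cmds(REPO_URL, PINNED_COMMIT, dest):
        subprocess.run(cmd, check=True)  # fail loudly if any step errors
```

Pinning to a hash (rather than a branch or even a tag, which can be moved) guarantees that a later compromise of the upstream default branch cannot silently change what gets installed.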
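For the path-exposure finding, one sandboxing mitigation is a file-access guard in the agent harness that denies reads of the known credential locations. This is a sketch of that idea, not part of `idealista-cli` itself; the guard function is hypothetical:

```python
from pathlib import Path

# Credential stores named by the skill (per the finding above).
SENSITIVE_PATHS = {
    Path("~/.config/idealista-cli/config.json").expanduser().resolve(),
    Path("~/.cache/idealista-cli/token.json").expanduser().resolve(),
}


def agent_may_read(path: str) -> bool:
    """Hypothetical guard: deny agent reads of known credential files.

    Resolving the path first defeats trivial bypasses such as
    '~/.config/../.config/idealista-cli/config.json'.
    """
    resolved = Path(path).expanduser().resolve()
    return resolved not in SENSITIVE_PATHS
```

A denylist like this is a last line of defense; an allowlist of permitted directories is stronger where the agent's legitimate file needs are known in advance.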
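The command-injection remediation can be sketched as validate-then-build: check each user-derived value against a strict pattern, then return an argv list rather than a shell string. The accepted `--operation` values and the distance bound below are assumptions for illustration, not documented `idealista-cli` constraints:

```python
import re

# A lat,lng pair such as "39.594,-0.458" -- nothing else passes.
CENTER_RE = re.compile(r"^-?\d{1,3}(?:\.\d+)?,-?\d{1,3}(?:\.\d+)?$")


def build_search_cmd(center: str, distance: int,
                     operation: str = "sale",
                     property_type: str = "homes") -> list:
    """Validate inputs, then return an argv list (no shell involved)."""
    if not CENTER_RE.fullmatch(center):
        raise ValueError(f"invalid --center value: {center!r}")
    if not (0 < distance <= 100_000):  # assumed sane upper bound in metres
        raise ValueError(f"invalid --distance value: {distance!r}")
    if operation not in {"sale", "rent"}:  # assumed valid operations
        raise ValueError(f"invalid --operation value: {operation!r}")
    return [
        "python3", "-m", "idealista_cli",
        "--center", center,
        "--distance", str(distance),
        "--operation", operation,
        "--property-type", property_type,
    ]
```

Passing the returned list to `subprocess.run` without `shell=True` keeps metacharacters such as `;` inert even if a validation check is bypassed, since no shell ever interprets the arguments.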
[View the full report on SkillShield](https://skillshield.io/report/8405dc0bb18f8793)