Trust Assessment
aimlapi-llm-reasoning received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 1 critical, 1 high, and 3 medium severity (no low-severity findings). Key findings include "Direct user input to LLM prompt" (critical), "User-controlled API endpoint for sensitive data" (high), and "Suspicious import: urllib.request" (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 41/100, reflecting the prompt-injection and endpoint-control findings detailed in the table below.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Direct user input to LLM prompt.** The `--system` and `--user` arguments are inserted directly into the `content` field of the messages sent to the LLM API, allowing an attacker to craft malicious prompts (prompt injection) that manipulate the model into generating harmful content, ignoring instructions, or performing unintended actions. *Remediation:* sanitize or validate the user-provided `--system` and `--user` inputs, and use a templating system that clearly separates user input from system instructions, escaping characters the LLM's parser treats as special. | LLM | `scripts/run_chat.py:60` |
| HIGH | **User-controlled API endpoint for sensitive data.** The `--base-url` argument accepts an arbitrary API endpoint, and the script then sends the `AIMLAPI_API_KEY` (read from environment variables) and the full LLM request payload, which may contain sensitive user data, to that user-controlled URL. This is a direct path to data exfiltration and credential harvesting, since an attacker can redirect the API call to a malicious server. *Remediation:* restrict `--base-url` to a whitelist of trusted endpoints; if dynamic endpoints are necessary, validate that they belong to a trusted domain, or remove the command-line option entirely in favor of secure configuration. | LLM | `scripts/run_chat.py:67` |
| MEDIUM | **Suspicious import: `urllib.request`.** This module provides network access. Verify the import is necessary; network and low-level system modules in skill code can indicate data exfiltration. | Static | `skills/d1m7asis/aimlapi-llm-reasoning/scripts/run_chat.py:6` |
| MEDIUM | **Arbitrary file write via `--output` argument.** The `--output` argument accepts an arbitrary file path to which the full JSON response from the LLM API is written. An attacker could overwrite critical system files, write sensitive LLM output to an accessible location, or fill the disk, leading to denial of service or data leakage. *Remediation:* confine `--output` to a designated safe directory (e.g., a temporary or user-specific output folder), sanitize the path to prevent directory traversal (`../`), and consider a confirmation prompt before overwriting existing files. | LLM | `scripts/run_chat.py:71` |
| MEDIUM | **Arbitrary JSON injection into LLM payload.** The `--extra-json` argument injects arbitrary JSON key-value pairs directly into the LLM API request payload. While `json.loads` prevents direct code injection, an attacker could abuse control-plane parameters the API may support (e.g., `tools`, `functions`, `stop_sequences`, or `temperature` outside safe bounds) to manipulate the LLM's behavior, bypass safety mechanisms, or influence tool execution. *Remediation:* validate the keys and values in `--extra-json` against an allowlist of known, safe parameters and their expected types, rejecting anything unknown. | LLM | `scripts/run_chat.py:65` |
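The `--base-url`, `--output`, and `--extra-json` remediations above can all be enforced at argument-parsing time. The following is a minimal sketch, not the skill's actual code: the allowlisted host, safe parameter names, and output directory are illustrative assumptions to be adjusted for a real deployment.

```python
import argparse
import json
from pathlib import Path
from urllib.parse import urlparse

# Assumed allowlists for illustration; tune these to the endpoints and
# parameters you actually trust.
TRUSTED_HOSTS = {"api.aimlapi.com"}
SAFE_EXTRA_KEYS = {"temperature", "top_p", "max_tokens"}
OUTPUT_DIR = Path("outputs")

def validate_base_url(url: str) -> str:
    """Reject any endpoint that is not HTTPS on an allowlisted host."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or parsed.hostname not in TRUSTED_HOSTS:
        raise argparse.ArgumentTypeError(f"untrusted base URL: {url}")
    return url

def validate_output_path(raw: str) -> Path:
    """Confine the output file to OUTPUT_DIR, blocking traversal like ../../etc/passwd."""
    candidate = (OUTPUT_DIR / raw).resolve()
    if not candidate.is_relative_to(OUTPUT_DIR.resolve()):
        raise argparse.ArgumentTypeError(f"output path escapes {OUTPUT_DIR}: {raw}")
    return candidate

def validate_extra_json(raw: str) -> dict:
    """Parse extra JSON and reject any key outside the allowlist."""
    data = json.loads(raw)
    unknown = set(data) - SAFE_EXTRA_KEYS
    if unknown:
        raise argparse.ArgumentTypeError(f"disallowed extra-json keys: {sorted(unknown)}")
    return data
```

Passing these validators as `type=` callbacks to `argparse.add_argument` makes bad values fail before any network request is built, which is why the sketch raises `ArgumentTypeError` rather than returning a sentinel.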
[View the full report on SkillShield](https://skillshield.io/report/72aa73a9eb3f4f32)
Powered by SkillShield