Trust Assessment
lightrag received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 1 high, 2 medium, and 0 low severity. Key findings include "Suspicious import: urllib.request", "API key stored in plain-text configuration file", and "User-controlled API endpoint allows data exfiltration".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **SSL/TLS certificate verification explicitly disabled.** The script explicitly disables SSL/TLS certificate verification (`ctx.check_hostname = False` and `ctx.verify_mode = ssl.CERT_NONE`), making all network communication, including the transmission of `query_text` and `api_key`, vulnerable to man-in-the-middle (MitM) attacks: an attacker could intercept, read, or modify data in transit without detection. Remediation: remove the lines that disable certificate verification and always validate certificates; if self-signed certificates are needed in a controlled environment, provide a mechanism to trust those specific certificates securely (see the TLS sketch below this table). | LLM | scripts/query_lightrag.py:60 |
| HIGH | **User-controlled API endpoint allows data exfiltration.** The skill lets users configure an arbitrary `url` for the LightRAG API server via the `config` command, so a malicious actor could point the skill at an attacker-controlled server and exfiltrate user queries (`query_text`) and potentially API keys. Remediation: restrict allowed URLs to a predefined allowlist where possible; if arbitrary URLs are necessary, clearly warn users about the risks of untrusted endpoints and never send API keys to untrusted URLs (see the endpoint allowlist sketch below this table). | LLM | scripts/query_lightrag.py:50 |
| MEDIUM | **Suspicious import: `urllib.request`.** Import of `urllib.request` detected. This module provides network or low-level system access, which in skill code may indicate data exfiltration. Verify that this import is necessary. | Static | skills/ruslanlanket/lightrag/scripts/query_lightrag.py:6 |
| MEDIUM | **API key stored in plain-text configuration file.** The skill stores API keys in a plain-text JSON file (`~/.lightrag_config.json`), leaving the key readable by other processes or users on the same system if file permissions are not strictly controlled. Remediation: store credentials via environment variables, a dedicated secrets manager, or an encrypted configuration file, and enforce strict permissions (e.g. `chmod 600`) on the config file (see the credential-storage sketch below this table). | LLM | scripts/query_lightrag.py:30 |
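TLS sketch. The snippet below illustrates the remediation for the critical finding: make the request with Python's default SSL context, which keeps certificate and hostname verification enabled, instead of disabling it. The endpoint URL, payload, and certificate path are illustrative placeholders, not values taken from the skill itself.

```python
import json
import ssl
import urllib.request

# Hypothetical endpoint and payload for illustration; the real skill reads
# these from its configuration file and command-line arguments.
url = "https://lightrag.example.com/query"
payload = json.dumps({"query": "example question"}).encode("utf-8")

# Use the default SSL context: certificate and hostname verification stay on.
# Do NOT set ctx.check_hostname = False or ctx.verify_mode = ssl.CERT_NONE.
ctx = ssl.create_default_context()

# If a self-signed certificate must be trusted in a controlled environment,
# pin that specific CA instead of disabling verification entirely, e.g.:
# ctx = ssl.create_default_context(cafile="/path/to/self_signed_ca.pem")

req = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, context=ctx, timeout=30) as resp:
    print(resp.read().decode("utf-8"))
```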
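Endpoint allowlist sketch. One possible mitigation for the high-severity finding is to validate any user-supplied endpoint against an allowlist before saving or using it. The hosts in `ALLOWED_HOSTS` are placeholders; a real deployment would substitute its own trusted LightRAG endpoints.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of trusted LightRAG hosts for illustration only.
ALLOWED_HOSTS = {"localhost", "127.0.0.1", "lightrag.internal.example.com"}


def validate_endpoint(url: str) -> str:
    """Reject endpoints that are not on the allowlist or use an unsafe scheme."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"Unsupported URL scheme: {parsed.scheme!r}")
    # Allow plain HTTP only for loopback addresses.
    if parsed.scheme == "http" and parsed.hostname not in ("localhost", "127.0.0.1"):
        raise ValueError("Plain HTTP is only allowed for local endpoints")
    if parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"Endpoint host {parsed.hostname!r} is not on the allowlist")
    return url


# Example: this raises ValueError instead of silently accepting an attacker URL.
# validate_endpoint("https://attacker.example/collect")
```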
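Credential-storage sketch. For the plain-text credential finding, a more defensive loader could prefer an environment variable and, when falling back to the config file, refuse to read it unless its permissions are owner-only. The `LIGHTRAG_API_KEY` variable name and the `api_key` JSON field are assumptions about the skill's configuration format, not confirmed details.

```python
import json
import os
import stat
from pathlib import Path

CONFIG_PATH = Path.home() / ".lightrag_config.json"


def load_api_key() -> str:
    """Prefer an environment variable; otherwise require an owner-only config file."""
    key = os.environ.get("LIGHTRAG_API_KEY")  # hypothetical variable name
    if key:
        return key

    # Raises FileNotFoundError if no config exists, which is the safe default.
    mode = stat.S_IMODE(CONFIG_PATH.stat().st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(
            f"{CONFIG_PATH} is readable by group/others; run: chmod 600 {CONFIG_PATH}"
        )
    return json.loads(CONFIG_PATH.read_text())["api_key"]
```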