Trust Assessment
telnyx-rag received a trust score of 64/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 7 findings: 0 critical, 0 high, 6 medium, and 0 low severity. Key findings include Suspicious import: urllib.request, Potential Prompt Injection via User Content to Telnyx LLM, Data Exfiltration to Telnyx API Endpoints.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings7
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| MEDIUM | Suspicious import: urllib.request Import of 'urllib.request' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/dotcom-squad/telnyx-rag/ask.py:20 | |
| MEDIUM | Suspicious import: urllib.request Import of 'urllib.request' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/dotcom-squad/telnyx-rag/search.py:18 | |
| MEDIUM | Suspicious import: urllib.request Import of 'urllib.request' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/dotcom-squad/telnyx-rag/sync.py:29 | |
| MEDIUM | Potential Prompt Injection via User Content to Telnyx LLM The `ask.py` script constructs prompts for the Telnyx LLM by directly incorporating user queries and retrieved context from the agent's local files. If these files or the user's query contain malicious instructions (e.g., 'ignore previous instructions', 'reveal system prompts'), they could manipulate the behavior of the Telnyx LLM. While this is an inherent risk in RAG systems, the skill does not implement specific sanitization or instruction-following mitigation for the retrieved context or query before sending it to the external LLM. Implement robust input sanitization or instruction-following mitigation techniques for both the user query and retrieved context before constructing the prompt for the external LLM. Consider using LLM-specific safety features or prompt templating that isolates user input from system instructions. Educate users about the risks of storing malicious instructions in their indexed files. | LLM | ask.py:350 | |
| MEDIUM | Command Injection via Sourcing .env File The `setup.sh` script uses the `source` command to load environment variables from the `.env` file. If an attacker can write arbitrary shell commands into the `.env` file, these commands would be executed by the shell when `setup.sh` is run, leading to arbitrary code execution. While the `.env` file is typically user-controlled, this represents a potential command injection vulnerability if the file's integrity is compromised. Avoid using `source` for `.env` files in shell scripts if possible. Instead, parse the `.env` file line by line to extract key-value pairs, ensuring that only variable assignments are processed and no arbitrary commands are executed. For example, use `grep -E '^[^#]+=' "$SCRIPT_DIR/.env" | xargs -I {} export {}` or a Python script to load the environment variables more safely. | LLM | setup.sh:130 | |
| MEDIUM | Excessive Filesystem Scope for RAG Indexing The `sync.py` script, by default, is configured to scan a broad `workspace` (`/home/node/clawd`) and includes extensive `patterns` (e.g., `memory/*.md`, `knowledge/*.json`, `skills/*/SKILL.md`, `docs/*.md`). While this broad scope is intended for a RAG skill to index relevant agent memory, it grants the skill access to read a significant portion of the agent's local filesystem. If the skill were compromised, this broad access could be exploited to exfiltrate sensitive local files not intended for indexing. Advise users to review and narrow the `workspace` and `patterns` configuration in `config.json` to include only files and directories strictly necessary for the RAG functionality. Implement a clear warning during setup or in documentation about the broad filesystem access and the implications of indexing sensitive data. | LLM | sync.py:30 | |
| INFO | Data Exfiltration to Telnyx API Endpoints The skill's core functionality involves sending user queries, local file content (after chunking), and bucket names to Telnyx API endpoints (e.g., for similarity search, embeddings, and LLM completions). This is expected behavior for a RAG skill utilizing Telnyx services. Users should be aware that their data is transmitted to and processed by Telnyx. Ensure users are fully aware that their data is being sent to Telnyx. Provide clear documentation on Telnyx's data handling and privacy policies. No direct code remediation is needed as this is intended functionality. | LLM | ask.py:100 |
Scan History
Embed Code
[](https://skillshield.io/report/e8152c97beca660a)
Powered by SkillShield