Trust Assessment
voice-assistant received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 1 critical, 1 high, 2 medium, and 1 low severity. Key findings include "OpenClaw Agent Executes Shell Commands Based on User Input" (critical), "LLM System Prompt Configurable via Environment Variable" (high), and "Unpinned Python dependency version" (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 46/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **OpenClaw Agent Executes Shell Commands Based on User Input.** The `SKILL.md` documentation explicitly states that the OpenClaw agent will execute shell commands (e.g., `cd {baseDir} && cp .env.example .env`, `uv run scripts/server.py`, updating `.env` files, restarting the server) based on natural language commands from the user. This is a severe command-injection vulnerability: a malicious user could craft prompts that execute arbitrary commands on the host system. The use of `{baseDir}` is particularly concerning if it can be influenced by user input, potentially enabling path traversal or arbitrary file operations. Remediation: (1) sanitize and validate any user input that is translated into shell commands, and avoid executing user-controlled strings directly; (2) run the OpenClaw agent with the minimum necessary permissions; (3) use programmatic APIs (e.g., Python's `configparser` or the `dotenv` library for `.env` files, or dedicated process-management libraries) instead of shell commands to modify configuration and manage processes; (4) keep `{baseDir}` a fixed, non-user-controlled, secure path. | LLM | SKILL.md:71 |
| HIGH | **LLM System Prompt Configurable via Environment Variable.** The `SYSTEM_PROMPT` for the LLM is loaded from the `VOICE_SYSTEM_PROMPT` environment variable. An attacker who can control this variable (e.g., through a compromised host environment or an insecure deployment) can inject arbitrary instructions into the LLM, potentially leading to malicious behavior, data leakage, or manipulation of the LLM's responses. Remediation: (1) secure the runtime environment so that only trusted administrators can modify environment variables; (2) if user-provided input is ever used to set `VOICE_SYSTEM_PROMPT`, rigorously validate and sanitize it to prevent injection of malicious instructions; (3) for critical system prompts, consider hardcoding them or loading them from a secure, immutable configuration source rather than an easily modified environment variable. | LLM | scripts/server.py:40 |
| MEDIUM | **Unpinned Python dependency version.** Dependency `fastapi>=0.115.0` is not pinned to an exact version. Remediation: pin Python dependencies to exact versions where feasible. | Dependencies | skills/charantejmandali18/voice-assistant/pyproject.toml |
| MEDIUM | **Sensitive User Transcripts Logged in Plain Text.** The skill logs final speech-to-text transcripts via `log.info(f"STT final: {transcript}")`, so all user conversations, which may contain sensitive personal information, are written to logs in plain text. If these logs are not properly secured, rotated, or purged, unauthorized individuals could access them, leading to a data breach. Remediation: (1) redact or mask potentially sensitive information in transcripts before logging; (2) store logs securely, with restricted access, encryption at rest, and appropriate retention policies; (3) make transcript logging configurable so it can be disabled or reduced in production; (4) re-evaluate whether full transcripts need to be logged at all, and consider logging only metadata or anonymized data. | LLM | scripts/server.py:98 |
| LOW | **Dependencies Loosely Pinned in pyproject.toml.** The `pyproject.toml` file specifies dependency versions with `>=` (e.g., `fastapi>=0.115.0`). This guarantees only a minimum version and allows automatic upgrades to new major versions, which could introduce breaking changes, unexpected behavior, or new vulnerabilities without explicit review, increasing supply-chain risk if a future dependency release introduces a flaw. Remediation: (1) pin exact versions (`==X.Y.Z`) for all production dependencies to ensure reproducible builds and prevent unexpected updates; (2) alternatively, use the compatible release operator (`~=X.Y`) to allow minor updates while blocking major breaking changes; (3) regularly audit dependencies for known vulnerabilities with tools such as `pip-audit` or Snyk. | LLM | pyproject.toml:9 |
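The two dependency-pinning findings both come down to the version specifier in `pyproject.toml`. A sketch of the recommended forms (version numbers are illustrative):

```toml
[project]
dependencies = [
    "fastapi==0.115.0",    # exact pin: reproducible builds
    # "fastapi~=0.115.0",  # alternative: allows 0.115.x patch releases only
]
```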
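The critical shell-command finding can be addressed by never building shell strings from user text at all. A minimal sketch, assuming an allowlist-dispatch design (the names `BASE_DIR`, `ALLOWED_ACTIONS`, and `run_action` are illustrative, not part of the skill):

```python
import subprocess
from pathlib import Path

# Fixed, non-user-controlled base directory (illustrative path).
BASE_DIR = Path("/opt/voice-assistant").resolve()

# Map each supported user intent to a fixed argv list. User text is
# only ever used to *select* an action, never spliced into a command.
ALLOWED_ACTIONS = {
    "copy_env_template": ["cp", ".env.example", ".env"],
    "start_server": ["uv", "run", "scripts/server.py"],
}

def run_action(action_name: str) -> subprocess.CompletedProcess:
    """Execute only pre-approved commands; reject anything else."""
    if action_name not in ALLOWED_ACTIONS:
        raise ValueError(f"unsupported action: {action_name!r}")
    argv = ALLOWED_ACTIONS[action_name]
    # shell=False (the default) means no shell metacharacter expansion,
    # so input like "; rm -rf /" can never become a command.
    return subprocess.run(argv, cwd=BASE_DIR, check=True)
```

Because the agent can only pick from `ALLOWED_ACTIONS`, prompt-injected text such as `"; rm -rf /"` fails the lookup instead of reaching a shell.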
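For the configurable-system-prompt finding, one mitigation pattern is a hardcoded fallback plus basic sanity checks on the environment value. A sketch under assumed limits (the default text, `MAX_PROMPT_LEN`, and the control-character check are illustrative choices, not the skill's actual values):

```python
import os
import re

# Hardcoded fallback used whenever the environment value looks unsafe.
DEFAULT_SYSTEM_PROMPT = "You are a helpful voice assistant."
MAX_PROMPT_LEN = 2000  # assumed cap on prompt size

def load_system_prompt() -> str:
    """Load VOICE_SYSTEM_PROMPT, falling back to a trusted default."""
    raw = os.environ.get("VOICE_SYSTEM_PROMPT", "")
    # Reject empty, oversized, or control-character-laden values
    # (newlines are allowed; other C0 controls are not).
    if not raw or len(raw) > MAX_PROMPT_LEN or re.search(r"[\x00-\x08\x0b-\x1f]", raw):
        return DEFAULT_SYSTEM_PROMPT
    return raw
```

This does not make the environment trustworthy, but it bounds what a tampered variable can inject and guarantees the agent always has a known-good prompt.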
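For the transcript-logging finding, redaction can run before the log call. A minimal sketch with illustrative patterns (real deployments would need a broader PII rule set than these two regexes):

```python
import logging
import re

log = logging.getLogger("voice")

# Example patterns only: mask email addresses and long digit runs
# (phone numbers, card fragments) before transcripts reach the log.
_EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
_DIGITS = re.compile(r"\d{6,}")

def redact(transcript: str) -> str:
    """Replace obvious PII tokens with placeholders."""
    transcript = _EMAIL.sub("[email]", transcript)
    return _DIGITS.sub("[number]", transcript)

def log_transcript(transcript: str) -> None:
    # Lazy %-formatting instead of an f-string, and redacted content.
    log.info("STT final: %s", redact(transcript))
```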
[View the full report](https://skillshield.io/report/e75216a7f1e9e9b7)
Powered by SkillShield