Security Audit
voice-ai-engine-development
github.com/sickn33/antigravity-awesome-skillsTrust Assessment
voice-ai-engine-development received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. Key findings include Untrusted user input directly used in LLM prompt construction.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Untrusted user input directly used in LLM prompt construction The `GeminiAgent` implementation in the provided examples constructs LLM prompts by directly incorporating `user_input` (derived from untrusted speech transcription) and `system_prompt` (potentially from untrusted configuration) into the conversation history and the prompt sent to the LLM. This allows an attacker to manipulate the LLM's behavior, potentially leading to unintended responses, data leakage, or execution of unauthorized actions if the LLM is integrated with external tools. The code explicitly shows `self.conversation_history.append(Message(role="user", content=user_input))` and `contents.append({"role": "user", "parts": [{"text": f"System Instruction: {self.system_prompt}"}]})` which are then used to build the LLM's input. This pattern is present in `examples/gemini_agent_example.py` and `examples/complete_voice_engine.py`. Implement robust prompt injection defenses. This typically involves: 1. **Input Sanitization/Validation:** While difficult for natural language, consider filtering out known malicious keywords or patterns if applicable. 2. **Privilege Separation:** Limit the LLM's access to sensitive tools or data. 3. **Output Validation:** Validate LLM outputs before acting on them. 4. **Instruction Tuning/Guardrails:** Use system prompts and fine-tuning to make the LLM more resilient to adversarial inputs. 5. **Contextual Separation:** Clearly delineate user input from system instructions within the prompt using distinct markers or separate API parameters if the LLM provider supports it. For example, instead of `f"System: {system_prompt}\nUser: {user_input}"`, use structured inputs where `system_prompt` is passed as a dedicated system message and `user_input` as a user message. 6. **Human-in-the-loop:** For critical actions, require human confirmation. | LLM | examples/gemini_agent_example.py:40 |
Scan History
Embed Code
[](https://skillshield.io/report/07e71672829bf8fb)
Powered by SkillShield