Trust Assessment
agent-doppelganger received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 3 critical, 1 high, 0 medium, and 0 low severity. Key findings include Persistence / self-modification instructions, Direct Echo of Untrusted Input Bypasses Policy Controls.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 10/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Persistence / self-modification instructions Shell RC file modification for persistence Remove any persistence mechanisms. Skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | skills/sieershafilone/agent-doppelganger/scripts/adg.py:10 | |
| CRITICAL | Persistence / self-modification instructions Shell RC file modification for persistence Remove any persistence mechanisms. Skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | skills/sieershafilone/agent-doppelganger/scripts/adg.py:144 | |
| CRITICAL | Persistence / self-modification instructions Shell RC file modification for persistence Remove any persistence mechanisms. Skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | skills/sieershafilone/agent-doppelganger/scripts/adg.py:148 | |
| HIGH | Direct Echo of Untrusted Input Bypasses Policy Controls The `generate_response` method, when `response_style` is 'match_language', directly includes the untrusted `message` in the returned string. Although marked as a placeholder for LLM interaction, this design allows an attacker to inject instructions into the `message` that could be executed by a downstream LLM. This bypasses the skill's stated policy evaluation, which is intended to occur *before* generation to prevent prompt injection. Ensure that untrusted user input (`message`) is never directly included in prompts or responses that will be processed by an LLM without proper sanitization, escaping, or a robust policy-based filtering mechanism applied *before* inclusion. The placeholder should be replaced with an actual LLM call that processes the message securely, or the message should be passed as a separate, untrusted parameter to the LLM, not as part of the system prompt. | LLM | scripts/adg.py:109 |
Scan History
Embed Code
[](https://skillshield.io/report/43abf1dcb8242dae)
Powered by SkillShield