Trust Assessment
legacy-testimony received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 2 critical, 3 high, 0 medium, and 0 low severity. Key findings include Prompt Injection in Ghost Agent Sub-Agent Prompt, Ghost Agent Sub-Agent Has Access to Decrypted Sensitive Data, and Public Blast Message Susceptible to Command Injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 0/100, indicating significant unresolved behavioral safety risks.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Ghost Agent Sub-Agent Has Access to Decrypted Sensitive Data.** The `activateGhostAgent` function spawns a sub-agent explicitly instructed to "Answer questions based on the provided legacy data." That legacy data includes decrypted passwords, files, crypto assets, and messages, as defined by the `PackageContent` interface and the skill description. If the sub-agent's behavior can be manipulated (e.g., via prompt injection), it could be coerced into revealing or exfiltrating this highly sensitive information to unauthorized parties. *Recommendation:* Implement strict guardrails and content filtering for the Ghost Agent's responses, and re-evaluate whether the sub-agent truly needs direct access to raw, decrypted sensitive data. Redact or anonymize parts of the legacy data before the LLM can access it, or provide only summaries and metadata. | LLM | scripts/legacy.ts:140 |
| CRITICAL | **High Risk of Credential Harvesting/Exfiltration During Crypto Asset Sweep.** The skill's description explicitly mentions "Crypto Asset Sweep: Automatically transfer funds from agent wallets to a safety address." The `trigger` function delivers packages, which can include `crypto_sweep` types, so the skill will handle and potentially decrypt private keys, seed phrases, or other highly sensitive cryptocurrency credentials. If the execution environment is compromised, or the delivery mechanism is insecure, these credentials could be harvested or exfiltrated, leading to irreversible loss of funds. *Recommendation:* Apply extremely robust protections for cryptocurrency credentials, such as hardware-level isolation, multi-party computation (MPC), or mandatory manual confirmation for transfers. Avoid storing raw private keys where possible, ensure the safety address is immutable and verified, and secure and audit every interaction with external crypto APIs. | LLM | scripts/legacy.ts:168 |
| HIGH | **Prompt Injection in Ghost Agent Sub-Agent Prompt.** The `activateGhostAgent` function constructs a sub-agent prompt using `loadConfig()?.owner.name`. An attacker who can manipulate the `owner.name` field in `config.json` (e.g., through a compromised configuration or user input) could inject malicious instructions into the prompt, leading the sub-agent to perform unauthorized actions or reveal sensitive information, especially since it is instructed to "Answer questions based on the provided legacy data." *Recommendation:* Sanitize or strictly validate `owner.name` before interpolating it into an LLM prompt. Use a templating mechanism that separates data from instructions, or pass `owner.name` as a tool parameter rather than embedding it directly in the prompt. | LLM | scripts/legacy.ts:140 |
| HIGH | **Public Blast Message Susceptible to Command Injection.** The comment in `publicBlast` (`// Implementation would call Moltbook API and Bird CLI`) indicates that the `message` string will be used in external command-line calls. Because `message` is user-defined via `legacy set-blast`, an attacker who can modify `config.json` could inject arbitrary shell commands if the string is passed unsanitized or unescaped to `spawn` or a similar shell-execution function, enabling remote code execution, data exfiltration, or denial of service. *Recommendation:* Always use the array form of `spawn` (e.g., `spawn('command', ['-arg', message])`) so the shell never interprets user input, and thoroughly sanitize and validate the message against malicious characters or patterns. | LLM | scripts/legacy.ts:153 |
| HIGH | **Protocol Omega Allows Recursive Deletion of a Potentially User-Controlled Directory.** The "Protocol Omega (Self-Destruct)" feature, described in `SKILL.md` and implied by `config.owner.wipeAfterDelivery`, will likely call `rmSync(LEGACY_DIR, { recursive: true, force: true })`. `LEGACY_DIR` defaults to `~/.legacy` but can be overridden via `process.env.LEGACY_DIR`, so an attacker who can manipulate that environment variable or the path could trigger recursive deletion of arbitrary directories on the agent's system, causing data loss or system instability. *Recommendation:* Strictly control the `LEGACY_DIR` path so it cannot be manipulated by untrusted input. If `process.env.LEGACY_DIR` is honored, rigorously validate that it points to an allowed, isolated directory, and prefer a granular deletion mechanism over a recursive wipe of a user-controllable path. | LLM | scripts/legacy.ts:20 |
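The first critical finding recommends exposing only summaries or metadata to the Ghost Agent rather than raw decrypted secrets. A minimal sketch of that redaction step, assuming hypothetical field names (the real `PackageContent` interface is not reproduced in this report):

```typescript
// Hypothetical sketch: hand the Ghost Agent metadata only, never raw
// decrypted secrets. Field names below are assumptions for illustration.
interface PackageContent {
  passwords?: Record<string, string>;
  cryptoKeys?: string[];
  messages?: string[];
}

function redactForGhostAgent(pkg: PackageContent) {
  return {
    // Counts only: the sub-agent can answer "how many" questions
    // without ever seeing a raw credential.
    passwordCount: Object.keys(pkg.passwords ?? {}).length,
    cryptoKeyCount: (pkg.cryptoKeys ?? []).length,
    // Personal messages may be surfaced; keys and passwords may not.
    messages: pkg.messages ?? [],
  };
}
```

Even with a successful prompt injection, a sub-agent given only this redacted view has nothing secret to leak.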
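One way to implement the recommended validation of `owner.name` before it reaches a prompt: strip line breaks and structural characters and cap the length. The allowed-character set and length limit here are illustrative assumptions, not the skill's actual policy:

```typescript
// Hypothetical sanitizer for config-sourced values interpolated into an
// LLM prompt. Collapses newlines (no multi-line instruction smuggling),
// then keeps only letters, digits, spaces, and basic name punctuation.
function sanitizeOwnerName(raw: string): string {
  return raw
    .replace(/[\r\n]+/g, " ")
    .replace(/[^\p{L}\p{N} .,'-]/gu, "")
    .slice(0, 80) // cap length so a "name" cannot carry a paragraph of instructions
    .trim();
}
```

Sanitization reduces risk but does not eliminate it; the report's stronger suggestion of passing the value as structured tool data, outside the instruction text, remains preferable where the agent framework supports it.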
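The command-injection recommendation maps directly onto the array form of Node's `child_process.spawn`. A sketch, assuming a hypothetical `bird post --message` CLI invocation (the actual Bird CLI arguments are not shown in the audited code):

```typescript
import { spawn } from "node:child_process";

// Build argv as an array so the message travels as one literal argument;
// shell metacharacters like `;`, `$()`, or backticks are never interpreted.
function buildBlastArgs(message: string): string[] {
  return ["post", "--message", message];
}

function publicBlastSafe(message: string): void {
  // shell: false (the default) guarantees no shell ever parses `message`.
  const child = spawn("bird", buildBlastArgs(message), { shell: false });
  child.on("error", (err) => console.error("blast failed:", err.message));
}
```

The key design point is that the untrusted string is data in `argv`, never part of a command line handed to a shell.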
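For the Protocol Omega finding, the `LEGACY_DIR` override can be validated before any recursive `rmSync` runs. A minimal sketch; confining the directory to the user's home is an assumed policy, not the skill's actual rule:

```typescript
import { homedir } from "node:os";
import { resolve, sep } from "node:path";

// Accept LEGACY_DIR only if it resolves to a path strictly inside the
// user's home directory; otherwise fall back to the default ~/.legacy.
function resolveLegacyDir(envValue: string | undefined): string {
  const fallback = resolve(homedir(), ".legacy");
  if (!envValue) return fallback;
  const candidate = resolve(envValue);
  const home = resolve(homedir());
  // Reject the home directory itself and anything outside it, so a
  // recursive wipe can never touch `/`, `/etc`, or `~` wholesale.
  // Appending `sep` also prevents prefix collisions like /home/user2.
  if (candidate === home || !candidate.startsWith(home + sep)) return fallback;
  return candidate;
}
```

Validating at resolution time, rather than at deletion time, means every code path that reads `LEGACY_DIR` inherits the same safety guarantee.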