Trust Assessment
compound-engineering received a trust score of 65/100, placing it in the Caution category. Users should review the security considerations below before deploying this skill.
SkillShield's automated analysis identified 3 findings: 0 critical, 3 high, 0 medium, and 0 low severity. The key findings are Unpinned External Dependency Execution, Potential Data Exfiltration via Git Commit/Push, and Self-Modifying Agent Vulnerable to Prompt Injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating room for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unpinned External Dependency Execution.** The skill instructs users to execute an external npm package (`compound-engineering`) via `npx` without specifying a version, so `npx` fetches and executes the latest available version. If the `compound-engineering` package on npm is compromised or a malicious update is published, users following this instruction would execute arbitrary malicious code, a significant supply-chain risk. Recommendation: pin the package version, e.g. `npx compound-engineering@1.0.0 review`, so a known, tested version is used, mitigating risks from future malicious updates. | LLM | SKILL.md:16 |
| HIGH | **Potential Data Exfiltration via Git Commit/Push.** The 'Nightly Review' loop explicitly instructs the agent to 'Commit and push changes' after extracting learnings, patterns, preferences, and decisions from 'sessions' and updating `MEMORY.md` and `AGENTS.md`. If these memory files contain sensitive user data or proprietary information extracted from the sessions, and the agent pushes to a public or insecure Git repository, this could lead to unauthorized data exfiltration; the skill provides no mechanism to redact or sanitize sensitive information before committing and pushing. Recommendation: implement robust sanitization or redaction for `MEMORY.md` and `AGENTS.md` before they are committed and pushed, warn users explicitly about the risks of pushing sensitive data to public repositories, and advise them to configure their Git remotes appropriately (e.g., private repositories only). | LLM | SKILL.md:36 |
| HIGH | **Self-Modifying Agent Vulnerable to Prompt Injection.** The skill's core functionality has the AI agent review its own past 'sessions' and update its instructions or memory (`MEMORY.md`, `AGENTS.md`) based on extracted 'learnings'. If the session content (e.g., user chat history, task outputs) contains malicious or manipulative prompts, these could be incorporated into the agent's long-term memory. This 'memory poisoning' could cause the agent to exhibit undesirable or harmful behaviors in the future, making it vulnerable to prompt injection from its own processed data. Recommendation: validate and sanitize extracted 'learnings' before writing them to memory files, explicitly instruct the agent to identify and reject manipulative or harmful instructions during review, and consider human-in-the-loop review for significant memory updates, especially to `AGENTS.md`. | LLM | SKILL.md:34 |
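The redaction recommended for the second finding could be wired into a pre-commit check. The sketch below is a minimal illustration, not part of the skill: the secret patterns and the file list are assumptions, and a production setup would use a dedicated scanner such as gitleaks instead of hand-rolled regexes.

```python
import re
import sys

# Illustrative patterns only; a real deployment needs a much broader set.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"(?i)aws_secret_access_key\s*[:=]\s*\S+"),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
]

def find_secrets(text):
    """Return (line_number, line) pairs that look like leaked secrets."""
    hits = []
    for n, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((n, line))
    return hits

if __name__ == "__main__":
    # Usage (e.g. from a git pre-commit hook):
    #   python check_secrets.py MEMORY.md AGENTS.md
    failed = False
    for path in sys.argv[1:]:
        with open(path) as f:
            for n, line in find_secrets(f.read()):
                print(f"{path}:{n}: possible secret: {line.strip()}")
                failed = True
    if failed:
        sys.exit(1)  # non-zero exit blocks the commit
```

A non-zero exit from the hook stops the agent's 'Commit and push changes' step until the flagged lines are redacted or confirmed safe.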
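For the third finding, one hedged sketch of "validate learnings before writing them to memory" is a deny-list filter applied to each extracted learning. The injection markers below are hypothetical examples, not an exhaustive defense; pattern matching alone will not catch a determined attacker, which is why the finding also recommends human-in-the-loop review.

```python
import re

# Hypothetical markers of instruction-override attempts; illustrative only.
INJECTION_MARKERS = [
    re.compile(r"(?i)ignore (all )?(previous|prior) instructions"),
    re.compile(r"(?i)you are now"),
    re.compile(r"(?i)disregard .*(safety|guidelines)"),
]

def filter_learnings(learnings):
    """Split candidate learnings into (accepted, rejected) lists.

    Rejected items should be surfaced for human review rather than
    silently written into MEMORY.md or AGENTS.md.
    """
    accepted, rejected = [], []
    for item in learnings:
        if any(p.search(item) for p in INJECTION_MARKERS):
            rejected.append(item)
        else:
            accepted.append(item)
    return accepted, rejected
```

Only the accepted list would be appended to the memory files; the rejected list would be queued for the human review the finding recommends.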
[](https://skillshield.io/report/5d59d90b8776ce9f)
Powered by SkillShield