Trust Assessment
compound-engineering received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 2 high, 1 medium, and 0 low severity. Key findings include: a self-modifying agent susceptible to persistent prompt injection, sensitive data exfiltration through a public or compromised Git repository, and an agent granted broad read/write access to sensitive data and version control.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, making it the primary area for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Self-modifying agent susceptible to persistent prompt injection.** The skill describes an AI agent designed to "Extract learnings and patterns" from sessions (untrusted input) and "Update MEMORY.md and AGENTS.md"; those files then serve as the agent's future instructions. Malicious content in a session could therefore inject harmful instructions into the agent's own memory/instruction files, a persistent prompt injection that compromises all future behavior. This is a fundamental design vulnerability for self-modifying agents that process untrusted input. Remediation: sanitize and validate sessions before processing, isolate the agent's instruction-modification capability, require manual or trusted-process approval of changes to AGENTS.md and MEMORY.md before the agent adopts them, and consider sandboxing the agent's execution environment. | LLM | SKILL.md:30 |
| HIGH | **Sensitive data exfiltration through a public or compromised Git repository.** The skill explicitly instructs the agent to "Commit and push changes" to a Git repository for `MEMORY.md` and `AGENTS.md`, files designed to store learnings, patterns, gotchas, preferences, and decisions extracted from user sessions. If sessions contain sensitive data (e.g., API keys, personal information, project details) and the repository is public or becomes compromised, that data is exfiltrated. Remediation: keep the memory repository strictly private with strong access controls, redact or filter sensitive data before it is written to memory files, and audit memory files regularly. | LLM | SKILL.md:33 |
| HIGH | **Agent granted broad read/write access to sensitive data and version control.** The described workflow requires broad permissions: "Scan all sessions from last 24h" implies extensive read access to potentially sensitive user data, and "Commit and push changes" implies write access to the filesystem and a remote repository. For a self-modifying agent processing untrusted input, this is a significant attack surface: a compromised agent could read sensitive data, modify its own instructions, or push malicious code or data to the repository. Remediation: apply the principle of least privilege to the agent's execution environment, strictly limit which sessions it can access, sandbox its filesystem and network operations, and use dedicated, limited-privilege Git credentials. | LLM | SKILL.md:31 |
| MEDIUM | **Unpinned `npx` dependency introduces supply chain risk.** The Quick Start guide instructs users to run `npx compound-engineering review` without specifying a version, so `npx` fetches and executes the latest published version of the `compound-engineering` package. A malicious actor who publishes a compromised version would have it executed automatically by anyone following the guide. Remediation: pin package versions when instructing users to run dependencies (e.g., `npx compound-engineering@1.0.0 review`) and audit dependencies regularly for known vulnerabilities. | LLM | SKILL.md:20 |
Embed Code
[View the full report on SkillShield](https://skillshield.io/report/3c249487363bdd7d)
Powered by SkillShield