Trust Assessment
engram received a trust score of 88/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding (0 critical, 1 high, 0 medium, 0 low severity). The sole finding is Potential Command Injection via User-Controlled Arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)

| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via User-Controlled Arguments | LLM | SKILL.md:15 |

The skill instructs the agent to execute local `engram` commands, passing user-controlled strings (e.g., search queries, memory content, text for extraction) as arguments. If the `engram` binary or the agent's execution environment does not properly sanitize or escape these arguments before internal shell execution (e.g., using `subprocess.run(..., shell=True)`), an attacker could inject arbitrary shell commands. For example, a malicious input like `'; rm -rf /'` could lead to remote code execution on the host system.

**Remediation:** Ensure that the `engram` binary (or any wrapper executing it) properly sanitizes and escapes all user-provided arguments before passing them to a shell. The safest approach is to use argument lists (e.g., `subprocess.run(['engram', 'search', user_input], shell=False)`) instead of concatenating user input directly into a shell command string.
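The remediation above can be sketched as follows. This is a minimal illustration, not the skill's actual wrapper: `run_engram_search` is a hypothetical helper, and the demonstration at the bottom substitutes `echo` for the `engram` binary so it runs anywhere.

```python
import subprocess

def run_engram_search(query: str) -> subprocess.CompletedProcess:
    # Hypothetical wrapper around the `engram` CLI (for illustration only).
    # Passing arguments as a list with shell=False delivers `query` to the
    # binary as a single argv entry: shell metacharacters such as `;`,
    # backticks, or `$(...)` are never interpreted by a shell.
    return subprocess.run(
        ["engram", "search", query],
        shell=False,
        capture_output=True,
        text=True,
    )

# Demonstration with a harmless stand-in for `engram`: even a
# hostile-looking query is treated as literal text, not shell syntax.
malicious = "'; rm -rf /'"
result = subprocess.run(["echo", malicious], capture_output=True, text=True)
print(result.stdout.strip())  # prints the raw string; nothing is executed
```

By contrast, building the command as a single string and running it with `shell=True` would hand the raw input to `/bin/sh`, which is exactly the injection path the finding describes.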
Full report: [skillshield.io/report/e64b096f07c3d157](https://skillshield.io/report/e64b096f07c3d157)
Powered by SkillShield