Trust Assessment
hybrid-memory received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is Potential Command Injection via Shell Script Arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via Shell Script Arguments.** The skill calls external shell scripts (`graphiti-search.sh`, `graphiti-log.sh`) and passes user-controlled input directly as arguments. If the underlying scripts do not sanitize or escape these arguments before execution, a malicious user could inject arbitrary shell commands: a crafted query or "fact to remember" could run unintended system commands, leading to data exfiltration, system modification, or denial of service. Both scripts must rigorously sanitize and escape all user-provided arguments before using them in any shell operation. Prefer safe argument-handling techniques (e.g., `printf %q` for quoting in bash, or `subprocess.run` with `shell=False` in Python), and document the input-sanitization requirement for users implementing or integrating these scripts. | LLM | SKILL.md:30 |
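The safe argument-handling pattern recommended in the finding can be sketched as follows. This is a minimal illustration, not the skill's actual implementation: `run_search` is a hypothetical wrapper, and `echo` stands in for a script like `graphiti-search.sh`.

```python
import shlex
import subprocess

def run_search(query: str) -> str:
    """Hypothetical wrapper that invokes an external script safely.

    Passing the argument as a separate list element with shell=False
    delivers it verbatim to the program: the shell never parses it,
    so metacharacters like $( ), ;, and | cannot trigger execution.
    """
    result = subprocess.run(
        ["echo", query],      # stand-in for ["./graphiti-search.sh", query]
        shell=False,          # no shell involved, so no command injection
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

# Hostile input is treated as inert data, not executed:
hostile = "$(rm -rf ~); cat /etc/passwd"
print(run_search(hostile))    # prints the literal string; nothing runs

# If a shell command string is unavoidable, quote each argument first:
cmd = "echo " + shlex.quote(hostile)
```

`shlex.quote` is the Python analogue of bash's `printf %q`: it wraps the value so a shell parses it as a single literal word.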
Scan History
Embed Code
[View the full report on SkillShield](https://skillshield.io/report/8a449fab95e19cb4)
Powered by SkillShield