Trust Assessment
logseq received a trust score of 65/100, placing it in the Caution category: the skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. The key findings are "Potential Command Injection via Git Execution" (critical) and "Potential Data Exfiltration and Excessive Local Data Access" (high).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via Git Execution.** The `logseq.Git.execCommand(args)` method allows execution of arbitrary Git commands. If the `args` parameter can be influenced by untrusted input (e.g., via a prompt injection attack on the LLM agent), an attacker could execute arbitrary shell commands on the host running Logseq, posing a severe risk of remote code execution or data manipulation. *Remediation:* restrict `execCommand` to a small set of safe, pre-defined Git operations, or remove it entirely if not strictly necessary; rigorously validate and sanitize any arguments passed to this method to prevent shell injection; consider sandboxing the execution environment. | LLM | SKILL.md:60 |
| HIGH | **Potential Data Exfiltration and Excessive Local Data Access.** The skill exposes sensitive local user data and system information through `logseq.App.getUserConfigs()`, `logseq.App.getCurrentGraph()`, and `logseq.Assets.listFilesOfCurrentGraph(path)`: `getUserConfigs()` can expose user preferences, `getCurrentGraph()` can reveal the local file path of the Logseq graph, and `listFilesOfCurrentGraph()` can list local files within the graph's asset directory. Combined with extensive read access to Logseq content (pages, blocks, Datalog queries), this creates a high risk of data exfiltration if the LLM agent is compromised or tricked into revealing the information externally. *Remediation:* review whether full user configurations and local graph paths need to be exposed; apply strict access controls and data sanitization to anything these methods return, especially if it may reach external systems or LLM outputs; for `listFilesOfCurrentGraph`, enforce robust path traversal prevention and restrict access to explicitly allowed directories. | LLM | SKILL.md:30 |
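The critical finding's remediation (allowlisting and input validation for `execCommand`) could be sketched as follows. This is a minimal illustration, not part of the Logseq API: the `validateGitArgs` helper, the allowed subcommand set, and the metacharacter pattern are all assumptions a plugin author would need to adapt to their actual use of `logseq.Git.execCommand`.

```typescript
// Hypothetical guard for arguments destined for logseq.Git.execCommand.
// Accepts only a small allowlist of read-only Git subcommands, and rejects
// any argument containing shell metacharacters or the dangerous
// --upload-pack option, which can run an arbitrary program.
const ALLOWED_SUBCOMMANDS = new Set(["status", "log", "diff", "show"]);
const UNSAFE_PATTERN = /[;&|`$<>\\\n]/;

function validateGitArgs(args: string[]): boolean {
  if (args.length === 0) return false;
  // First element must be an allowlisted subcommand.
  if (!ALLOWED_SUBCOMMANDS.has(args[0])) return false;
  // Every argument must be free of shell metacharacters and option smuggling.
  return args.every(
    (a) => !UNSAFE_PATTERN.test(a) && !a.startsWith("--upload-pack")
  );
}
```

A caller would then gate the risky API behind the check, e.g. `if (validateGitArgs(args)) await logseq.Git.execCommand(args);`, refusing anything that fails validation rather than trying to sanitize it in place.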
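For the high-severity finding, the recommended path traversal prevention for `listFilesOfCurrentGraph` could look like the sketch below. The `isInsideAssetRoot` helper is hypothetical (not a Logseq API), and it assumes the plugin knows the graph's asset root directory; the idea is simply to resolve the requested path and confirm it never escapes that root.

```typescript
import * as path from "path";

// Hypothetical check that a requested asset path stays inside the graph's
// asset root. Resolving the combined path and then re-relativizing it
// defeats `../` traversal sequences and absolute-path requests alike.
function isInsideAssetRoot(assetRoot: string, requested: string): boolean {
  const resolved = path.resolve(assetRoot, requested);
  const rel = path.relative(assetRoot, resolved);
  // Inside the root iff the relative path is non-empty, does not climb
  // upward, and is not absolute.
  return rel !== "" && !rel.startsWith("..") && !path.isAbsolute(rel);
}
```

With this guard in place, a request like `../../etc/passwd` resolves outside the asset root and is rejected before any file listing occurs.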