Trust Assessment
workflowy received a trust score of 68/100, placing it in the Caution category. The skill has security findings that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 0 high, 1 medium, and 0 low severity. Key findings include Potential Command Injection via unsanitized user input and Excessive Permissions / Destructive Capabilities Exposed.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via unsanitized user input.** The skill provides examples of executing external `workflowy` commands with arguments that could originate from user input. If the host LLM constructs these commands by directly interpolating unsanitized user-provided strings into shell commands, it could lead to arbitrary command execution. For example, if a user provides input like `'; rm -rf /'` for an `<item-id>`, it could be executed as part of the shell command. The host LLM must rigorously sanitize and escape all user-provided input before incorporating it into shell commands. Consider using a safe command execution library or explicitly quoting/escaping arguments to prevent shell metacharacter interpretation (see the argument-list sketch after the table). | LLM | SKILL.md:32 |
| MEDIUM | **Excessive Permissions / Destructive Capabilities Exposed.** The `workflowy` CLI, as described by the skill, provides powerful and potentially destructive operations such as `delete` (which deletes children), `replace` (bulk find/replace), and full read/write access to the user's Workflowy outline via an API key. An LLM, if not carefully constrained or without explicit user confirmation, could execute these commands, leading to significant data loss or modification. The host LLM should implement strict confirmation prompts for any destructive operations (e.g., delete, bulk replace) and ensure that the scope of operations is clearly understood and approved by the user. Consider limiting the LLM's ability to initiate such commands without explicit, multi-step user consent (see the confirmation-gating sketch after the table). | LLM | SKILL.md:56 |
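To illustrate the mitigation for the critical finding, here is a minimal sketch of how a host could invoke the `workflowy` CLI without a shell, passing user-supplied values as discrete arguments and validating them first. The `run_workflowy` helper, the subcommand shape, and the item-ID pattern are illustrative assumptions, not part of the skill or of SkillShield's output.

```python
import re
import subprocess

# Assumed, illustrative pattern for Workflowy item IDs; the real format may differ.
ITEM_ID_RE = re.compile(r"^[0-9a-fA-F-]{8,64}$")

def run_workflowy(subcommand: str, item_id: str) -> str:
    """Run the workflowy CLI with an argument list instead of a shell string,
    so metacharacters in user input (e.g. '; rm -rf /') stay literal text."""
    if not ITEM_ID_RE.fullmatch(item_id):
        raise ValueError(f"Rejected suspicious item id: {item_id!r}")
    result = subprocess.run(
        ["workflowy", subcommand, item_id],  # no shell=True, so nothing is re-parsed
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```

If building a shell string is unavoidable, applying `shlex.quote()` to each interpolated value is the usual fallback.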
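For the medium finding, the sketch below gates destructive subcommands behind explicit user confirmation, assuming `delete` and `replace` are the destructive operations named in the finding; the `confirm_and_run` helper and prompt wording are hypothetical.

```python
import subprocess

# Destructive subcommands taken from the finding above; the real CLI surface may differ.
DESTRUCTIVE = {"delete", "replace"}

def confirm_and_run(subcommand: str, *args: str) -> None:
    """Require an explicit 'yes' from the user before any destructive operation runs."""
    if subcommand in DESTRUCTIVE:
        summary = " ".join(["workflowy", subcommand, *args])
        answer = input(f"About to run a destructive operation:\n  {summary}\nType 'yes' to proceed: ")
        if answer.strip().lower() != "yes":
            print("Aborted.")
            return
    subprocess.run(["workflowy", subcommand, *args], check=True)
```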
Scan History
[View the full report](https://skillshield.io/report/5f415b10a64418f3)