Trust Assessment
Joan Workflow received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. The key findings are Potential Command Injection via CLI Arguments and Data Exfiltration via Context Generation.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via CLI Arguments.** The skill describes various `joan` CLI commands that accept user-provided arguments (e.g., `<workspace-id>`, `<id>`, `<todo-id>`). If the host LLM constructs these commands by interpolating untrusted user input directly into the arguments without sanitization, an attacker could craft input that executes arbitrary shell commands. Remediation: strictly validate and sanitize all arguments when generating `joan` CLI commands from user input, and consider a dedicated command execution utility that escapes arguments automatically. | LLM | SKILL.md:50 |
| MEDIUM | **Data Exfiltration via Context Generation.** The command `joan context claude` generates `CLAUDE.md` with 'Joan context' derived from local 'pods' containing 'domain knowledge'. If users store sensitive or proprietary information in these pods, and `CLAUDE.md` is then provided to the LLM as operational context, this creates a path for data to move from the user's local file system to the LLM's processing environment. Remediation: explicitly warn users about placing sensitive data in pods used for context generation, make clear that content in generated context files may be processed by the LLM and transmitted to external services, and consider options to redact or filter sensitive information before context generation. | LLM | SKILL.md:78 |
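The remediation for the HIGH finding can be sketched in Python. The `run_joan` wrapper and the ID pattern below are hypothetical illustrations, not part of the skill: the point is that arguments are validated against an allowlist pattern and passed as a list, so no shell ever interprets them.

```python
import re
import subprocess

# Assumed ID shape for workspace/todo identifiers; adjust to the real format.
ID_PATTERN = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def run_joan(subcommand: str, *args: str) -> subprocess.CompletedProcess:
    """Run a `joan` CLI command with strictly validated arguments."""
    for arg in args:
        if not ID_PATTERN.fullmatch(arg):
            raise ValueError(f"rejected unsafe argument: {arg!r}")
    # The list form never invokes a shell, so metacharacters in args are inert.
    return subprocess.run(["joan", subcommand, *args],
                          capture_output=True, text=True, check=True)

# run_joan("todo", "abc; rm -rf /")  # raises ValueError before anything runs
```

Passing a list to `subprocess.run` (rather than a string with `shell=True`) is what makes injection structurally impossible; the validation layer is a second line of defense that also produces a clear error for the user.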
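For the MEDIUM finding, a minimal redaction pass over pod content before it is written into `CLAUDE.md` might look like the following. The patterns and the `redact` helper are illustrative assumptions, not features of the `joan` CLI:

```python
import re

# Heuristic patterns for likely credentials; a real deployment would use a
# dedicated secret scanner and let users configure additional rules.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*\S+"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # AWS access key ID shape
]

def redact(text: str, placeholder: str = "[REDACTED]") -> str:
    """Replace likely credentials so they never reach the LLM context."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

Pattern-based redaction is best-effort only, so the report's other advice still applies: warn users up front that anything left in pods may end up in generated context files.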
[View the full report on SkillShield](https://skillshield.io/report/17d98e5bc5b80231)
Powered by SkillShield