Security Audit
planning-workflow
github.com/Mrc220/agent_flywheel_clawdbot_skills_and_integrationsTrust Assessment
planning-workflow received a trust score of 94/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 1 finding: 0 critical, 0 high, 1 medium, and 0 low severity. Key findings include Workflow exposes target LLMs to prompt injection.
The analysis covered 4 layers: dependency_graph, static_code_analysis, manifest_analysis, llm_behavioral_safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 11, 2026 (commit c7bd8e0f). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| MEDIUM | Workflow exposes target LLMs to prompt injection The skill describes a workflow that instructs users to paste potentially untrusted or user-controlled 'plans' directly into prompts for other Large Language Models (LLMs) such as GPT Pro and Claude Code. This pattern, where user-controlled content is concatenated directly with system instructions, creates a significant risk of prompt injection against the target LLMs. A malicious 'plan' could contain instructions designed to override the LLM's original directives, extract sensitive information, or generate undesirable content from the target LLM. Implement robust input sanitization or use structured input methods (e.g., XML tags, JSON) to clearly separate user-provided plan content from system instructions within the prompts. For example, instead of direct concatenation, encapsulate the user's plan within a defined structure like `<plan_content>...</plan_content>` to make it harder for malicious instructions within the plan to escape their intended context and inject the target LLM. This applies to all instances where user-controlled or LLM-generated content is inserted into a prompt for another LLM (e.g., lines 54, 69, 90). | Unknown | SKILL.md:54 |
Scan History
Embed Code
[](https://skillshield.io/report/d4fa0c727f514abc)
Powered by SkillShield