Security Audit
textcortex-automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
textcortex-automation received a trust score of 70/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings include "Potential Prompt Injection into Downstream LLM", "Excessive Permissions via RUBE_REMOTE_WORKBENCH", and "Potential Data Exfiltration via Textcortex Processing".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Prompt Injection into Downstream LLM.** The skill facilitates interaction with Textcortex, an AI writing assistant, via Rube MCP tools. The examples provided, such as `queries: [{use_case: "your specific Textcortex task"}]` and `arguments: {/* schema-compliant args from search results */}` for `RUBE_SEARCH_TOOLS` and `RUBE_MULTI_EXECUTE_TOOL`, indicate that user-controlled input can be passed to Textcortex. If Textcortex is an LLM-based service, this creates a vector for prompt injection attacks, where a malicious user could craft input to manipulate Textcortex's behavior, potentially leading to unintended content generation, data exposure, or other undesirable outcomes. *Mitigation:* Implement strict input validation and sanitization for all user-controlled data passed to Textcortex operations. Consider using allow-lists for `use_case` parameters and carefully review argument schemas. Ensure Textcortex itself has robust prompt injection defenses. | LLM | SKILL.md:40 |
| HIGH | **Excessive Permissions via RUBE_REMOTE_WORKBENCH.** The skill exposes `RUBE_REMOTE_WORKBENCH` for 'Bulk ops' using `run_composio_tool()`. The term 'workbench' and the generic nature of `run_composio_tool()` suggest a highly flexible and potentially unconstrained execution environment. Without explicit sandboxing or limitations described, this tool could allow the LLM to execute arbitrary Composio tools or even arbitrary code, potentially exceeding the intended scope of Textcortex automation and leading to unauthorized actions or resource access. *Mitigation:* Clarify and restrict the capabilities of `RUBE_REMOTE_WORKBENCH` and `run_composio_tool()`. Ensure it operates within a strictly defined sandbox and can only perform actions directly relevant to Textcortex automation. Provide detailed documentation on its limitations and security implications. | LLM | SKILL.md:67 |
| MEDIUM | **Potential Data Exfiltration via Textcortex Processing.** The skill allows the LLM to send data to Textcortex via `RUBE_MULTI_EXECUTE_TOOL` for processing. If sensitive user data is included in the `arguments` for Textcortex operations, there is a risk that this data could be logged, stored, or processed by Textcortex in an insecure manner. Furthermore, if Textcortex generates output containing sensitive information, this data could be returned to the LLM and potentially exposed or stored without proper controls. The `memory` parameter also presents a potential vector if it can be used to store and retrieve arbitrary data without proper access controls. *Mitigation:* Implement data handling policies to prevent sensitive information from being passed to Textcortex. Ensure that Textcortex's data retention and security policies are understood and adhered to. Sanitize or redact sensitive information from Textcortex outputs before further processing or storage. Implement strict access controls and encryption for any data stored in the `memory` parameter. | LLM | SKILL.md:48 |
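The mitigations above can be combined into a guard layer that sits in front of the Rube MCP calls. The sketch below is illustrative only: the tool names `RUBE_SEARCH_TOOLS`, `RUBE_MULTI_EXECUTE_TOOL`, and `RUBE_REMOTE_WORKBENCH` come from the findings, but the specific `ALLOWED_TOOLS`, `ALLOWED_USE_CASES`, and redaction patterns are hypothetical placeholders that a real deployment would have to define for itself.

```python
import re

# Hypothetical allow-lists -- safe values depend on the deployment.
# RUBE_REMOTE_WORKBENCH is deliberately excluded per the HIGH finding.
ALLOWED_TOOLS = {"RUBE_SEARCH_TOOLS", "RUBE_MULTI_EXECUTE_TOOL"}
ALLOWED_USE_CASES = {"summarize_text", "rewrite_paragraph", "generate_outline"}

# Illustrative redaction patterns for obviously sensitive strings.
REDACTION_PATTERNS = [
    re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"),  # email addresses
    re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),             # API-key-like tokens
]

def redact(value):
    """Recursively mask sensitive-looking strings inside tool arguments."""
    if isinstance(value, str):
        for pattern in REDACTION_PATTERNS:
            value = pattern.sub("[REDACTED]", value)
        return value
    if isinstance(value, dict):
        return {k: redact(v) for k, v in value.items()}
    if isinstance(value, list):
        return [redact(v) for v in value]
    return value

def guard_tool_call(tool_name, arguments):
    """Validate and sanitize a proposed MCP tool call before executing it."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} is not on the allow-list")
    use_case = arguments.get("use_case")
    if use_case is not None and use_case not in ALLOWED_USE_CASES:
        raise ValueError(f"use_case {use_case!r} is not on the allow-list")
    return tool_name, redact(arguments)
```

With this guard in place, a call routed to `RUBE_REMOTE_WORKBENCH` is rejected outright, and arguments forwarded to Textcortex have email addresses and key-shaped strings masked before they leave the process. Output returned from Textcortex could be passed through the same `redact()` helper before storage.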
[View the full report](https://skillshield.io/report/e87c1c38433f966c)