Security Audit
kaggle-automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
kaggle-automation received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified one finding (0 critical, 0 high, 1 medium, 0 low severity): broad access to Kaggle operations via Rube MCP tools.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | Broad access to Kaggle operations via Rube MCP tools | LLM | SKILL.md:50 |

The skill instructs the LLM to use `RUBE_MULTI_EXECUTE_TOOL` and `RUBE_REMOTE_WORKBENCH` to perform Kaggle operations. These tools, provided by the Rube MCP and the Composio Kaggle toolkit, grant the LLM the ability to execute a wide range of actions on the user's Kaggle account. While this is the intended purpose of an automation skill, it represents a high level of privilege: misinterpretation of user prompts, or malicious prompts, could lead to unintended or harmful actions on Kaggle, such as data manipulation, competition submissions, or profile changes.

**Recommendation:** Implement strict input validation and user confirmation for sensitive Kaggle operations when using `RUBE_MULTI_EXECUTE_TOOL` or `RUBE_REMOTE_WORKBENCH`. If possible, configure granular permissions within the Rube MCP or Composio toolkit to limit the scope of actions the LLM can perform, or restrict the types of `tool_slug` values that can be executed without explicit human approval.
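The restriction on `tool_slug` values recommended above can be sketched as a small dispatch guard: an allowlist of low-risk slugs that pass through, with everything else requiring explicit human approval. This is a minimal illustration, not part of the Rube MCP or Composio APIs; the slug names and the `require_approval` helper are hypothetical.

```python
# Hypothetical guard for Rube MCP tool dispatch: allowlist read-only
# Kaggle tool slugs and require explicit human approval for anything else.
# The slug names below are illustrative, not the toolkit's actual identifiers.

READ_ONLY_SLUGS = {
    "KAGGLE_LIST_COMPETITIONS",      # assumed read-only operation
    "KAGGLE_GET_DATASET_METADATA",   # assumed read-only operation
}

def require_approval(tool_slug: str, confirm=input) -> bool:
    """Return True if the call may proceed.

    Read-only slugs pass automatically; any other slug prompts the
    human operator (via `confirm`) before the tool call is executed.
    """
    if tool_slug in READ_ONLY_SLUGS:
        return True
    answer = confirm(f"Allow sensitive Kaggle operation '{tool_slug}'? [y/N] ")
    return answer.strip().lower() == "y"
```

A caller would invoke `require_approval(tool_slug)` before forwarding any request to `RUBE_MULTI_EXECUTE_TOOL`, and refuse the call when it returns `False`. The `confirm` parameter is injectable so the prompt can be routed through whatever approval channel the host application provides.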
[Full report](https://skillshield.io/report/5eea6016a3801907)
Powered by SkillShield