Security Audit
apipie-ai-automation
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
apipie-ai-automation received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 0 high, 1 medium, and 0 low severity. The key finding is "Generic tool execution allows broad access."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | Generic tool execution allows broad access | LLM | SKILL.md:48 |

The skill uses `RUBE_MULTI_EXECUTE_TOOL` and `RUBE_REMOTE_WORKBENCH` to dynamically execute any tool discovered via `RUBE_SEARCH_TOOLS` within the `apipie_ai` toolkit. This grants the AI agent broad access to all functionality the `apipie_ai` toolkit exposes, with no explicit restrictions or sandbox mechanism defined in the skill itself. A malicious prompt could instruct the agent to execute sensitive or destructive tools, if any exist in the toolkit, leading to excessive permissions.

Recommendation: implement granular access controls for specific tools within the `apipie_ai` toolkit, or introduce a whitelist/blacklist mechanism for the tool slugs the agent is allowed to execute via this skill. Consider sandboxing the execution environment for `RUBE_MULTI_EXECUTE_TOOL`, or requiring explicit user confirmation for sensitive operations, especially for tools with potentially destructive capabilities.
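The whitelist mitigation suggested above can be sketched as a thin gate in front of the dynamic executor. This is a minimal, hypothetical illustration: the tool-slug names in `ALLOWED_TOOL_SLUGS` and the `guarded_execute` wrapper are invented for the example; only the `RUBE_*` tool names come from the finding itself.

```python
# Hypothetical allowlist gate placed in front of dynamic tool execution.
# Slug names below are illustrative, not real apipie_ai tool slugs.
ALLOWED_TOOL_SLUGS = {
    "APIPIE_AI_LIST_MODELS",   # assumed read-only operation
    "APIPIE_AI_GET_USAGE",     # assumed read-only operation
}


def guarded_execute(tool_slug: str, arguments: dict) -> dict:
    """Refuse any tool slug not explicitly allowlisted for this skill."""
    if tool_slug not in ALLOWED_TOOL_SLUGS:
        raise PermissionError(
            f"Tool {tool_slug!r} is not allowlisted for this skill"
        )
    # In a real skill this would delegate to the actual executor
    # (e.g. RUBE_MULTI_EXECUTE_TOOL); here we just echo the dispatch.
    return {"tool": tool_slug, "args": arguments, "status": "dispatched"}
```

The same shape works for a blacklist (invert the membership test) or for a confirmation step: instead of raising, the gate could pause and ask the user before dispatching any slug outside the read-only set.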
[View the full report on SkillShield](https://skillshield.io/report/314ce33aadd3e856)
Powered by SkillShield