Security Audit
Docker Hub Automation
Source: github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
Docker Hub Automation received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings: a tool that allows adding members to Docker Hub organizations, and a tool that allows creating webhooks to arbitrary URLs, posing a data exfiltration risk.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Tool allows adding members to Docker Hub organizations.** The `DOCKER_HUB_ADD_ORG_MEMBER` tool can invite any user (identified by Docker ID or email) to a Docker Hub organization. An attacker could craft a prompt that tricks the host LLM into executing this tool with a malicious actor's credentials, potentially granting unauthorized access to the user's organization, repositories, and other resources. Although the tool description states it "Requires owner or admin permissions", its exposure via the skill makes it a high-risk capability if the LLM is compromised. Mitigation: require strict user confirmation or approval workflows before executing `DOCKER_HUB_ADD_ORG_MEMBER`, consider restricting the tool's availability or requiring explicit human approval for sensitive actions, and ensure the host LLM is robustly protected against prompt injection. | LLM | SKILL.md:80 |
| HIGH | **Tool allows creating webhooks to arbitrary URLs, posing a data exfiltration risk.** The `DOCKER_HUB_CREATE_WEBHOOK` tool sets up repository webhooks that send notifications (e.g., on image pushes) to a specified URL. An attacker could manipulate the host LLM via prompt injection into creating a webhook that points to an attacker-controlled server, allowing exfiltration of image-push metadata (repository name, image tags, push times) from the user's Docker Hub repositories. Although the tool description states it "Requires admin permissions on the repository", the ability to specify an arbitrary notification URL is a clear data exfiltration vector. Mitigation: validate and confirm webhook URLs with the user, especially external domains; consider allowlisting permitted webhook domains or requiring explicit human approval for new webhooks; and ensure the host LLM is robustly protected against prompt injection. | LLM | SKILL.md:85 |
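The approval workflow recommended in the first finding could be sketched as a thin gate in front of the tool dispatcher. This is a minimal illustration, not part of the skill: `SENSITIVE_TOOLS`, `guarded_dispatch`, and `dispatch_tool` are hypothetical names assumed for the example.

```python
# Illustrative approval gate: tools on the sensitive list require interactive
# human confirmation before they are dispatched. All names here are
# hypothetical; the Docker Hub Automation skill defines no such wrapper.

SENSITIVE_TOOLS = {"DOCKER_HUB_ADD_ORG_MEMBER", "DOCKER_HUB_CREATE_WEBHOOK"}

class ApprovalDenied(Exception):
    """Raised when the human operator rejects a sensitive tool call."""

def guarded_dispatch(tool_name, args, dispatch_tool, confirm=input):
    """Run tool_name via dispatch_tool, but ask the operator first when the
    tool is on the sensitive list. `confirm` is injectable for testing."""
    if tool_name in SENSITIVE_TOOLS:
        answer = confirm(f"Approve {tool_name} with args {args}? [y/N] ")
        if answer.strip().lower() != "y":
            raise ApprovalDenied(tool_name)
    return dispatch_tool(tool_name, args)
```

The gate sits outside the LLM loop, so a prompt-injected request still stalls at the human approval step.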
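The domain allowlist suggested for the webhook finding could be checked before `DOCKER_HUB_CREATE_WEBHOOK` is ever invoked. The domains below are placeholders; a real deployment would substitute its own trusted hosts.

```python
# Illustrative webhook-URL allowlist check. The allowed domains are example
# values, not part of the skill or of Docker Hub's API.
from urllib.parse import urlparse

ALLOWED_WEBHOOK_DOMAINS = {"hooks.example-ci.internal", "hooks.slack.com"}

def webhook_url_allowed(url: str) -> bool:
    """Accept only HTTPS URLs whose host is on the allowlist; everything
    else (plain HTTP, unknown domains, malformed URLs) is rejected."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    return parsed.hostname in ALLOWED_WEBHOOK_DOMAINS
```

An exact-hostname match avoids suffix tricks such as `hooks.slack.com.attacker.example`, which a naive substring check would accept.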
Scan History
Full report: https://skillshield.io/report/6e87c284f119437e
Powered by SkillShield