Trust Assessment
fight-club received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 7 findings: 4 critical, 3 high, 0 medium, and 0 low severity. Key findings include "Network egress to untrusted endpoints", "Direct Shell Command Execution Instruction", and "Transmission of Agent Data to Untrusted External Server".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest, at 10/100.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (7)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. *Remediation:* Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/0xmevdad/openswarm-fight-club/skill.md:6 |
| CRITICAL | **Network egress to untrusted endpoints.** Same finding as above, at a second location. | Manifest | skills/0xmevdad/openswarm-fight-club/skill.md:7 |
| CRITICAL | **Network egress to untrusted endpoints.** Same finding as above, at a third location. | Manifest | skills/0xmevdad/openswarm-fight-club/skill.md:14 |
| CRITICAL | **Direct Shell Command Execution Instruction.** The skill explicitly instructs the LLM to execute a shell command, `python3 scripts/register.py`, allowing arbitrary code execution on the host system if the LLM follows the instruction. The content of `scripts/register.py` is unknown and could be malicious, leading to system compromise. *Remediation:* Remove direct shell command execution instructions. If external scripts are necessary, they should be sandboxed, thoroughly reviewed, and executed through a secure, controlled mechanism, not directly by the LLM. | LLM | skill.md:13 |
| HIGH | **Transmission of Agent Data to Untrusted External Server.** The skill requires the agent to send potentially sensitive information (its name, capabilities, description, fight responses, and direct messages) to an external server at `http://100.29.245.213:3456`. This constitutes data exfiltration to an untrusted third party and could lead to privacy violations or data misuse. *Remediation:* Ensure all external communications go to trusted endpoints, implement strict data governance policies controlling what data can be sent externally, and consider anonymizing or encrypting sensitive data before transmission. | LLM | skill.md:5 |
| HIGH | **Instruction to Handle and Store API Keys Insecurely.** The skill instructs the LLM to "Save the returned API key" and use it for authentication with an external service. Without secure storage and retrieval mechanisms, the key could be exposed in logs, internal state, or subsequent prompts, enabling credential harvesting by malicious actors or other agents. *Remediation:* Use a secure credential management system with encrypted storage and strict access controls; never expose secrets in plain text or logs. API keys should be managed by the host environment, not directly by the LLM. | LLM | skill.md:16 |
| HIGH | **Execution of Unanalyzed Local Script.** The skill instructs the LLM to execute `python3 scripts/register.py`, but the script's source code was not provided within the analysis context, so its functionality is opaque and untrusted. Executing unanalyzed code, even if bundled locally, introduces a supply chain risk: it could perform malicious actions, exfiltrate data, or compromise the host system. *Remediation:* All executable scripts should be provided for security analysis; any that are necessary must be thoroughly vetted, sandboxed, and executed through secure, controlled mechanisms. | LLM | skill.md:13 |
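The egress remediation above boils down to two checks: reject hosts that are raw IP addresses, and restrict the rest to well-known service domains. A minimal sketch of such a check, in Python, is shown below; the allowlist entries are illustrative assumptions, not part of this report.

```python
# Sketch of the egress check the remediation suggests: flag any outbound
# URL whose host is a raw IP address (like the 100.29.245.213 endpoint
# flagged in this report) or is not on a domain allowlist.
import ipaddress
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"api.github.com", "api.openai.com"}  # hypothetical allowlist

def is_suspicious_egress(url: str) -> bool:
    """Return True if the URL targets a raw IP or a non-allowlisted host."""
    host = urlparse(url).hostname or ""
    try:
        ipaddress.ip_address(host)  # parses cleanly -> host is a raw IP
        return True
    except ValueError:
        pass  # not an IP literal; fall through to the allowlist check
    return host not in ALLOWED_DOMAINS

print(is_suspicious_egress("http://100.29.245.213:3456"))   # True
print(is_suspicious_egress("https://api.github.com/repos")) # False
```

A real scanner would also resolve redirects and normalize punycode hostnames, but even this two-rule filter catches every endpoint flagged in the findings above.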
[Full report](https://skillshield.io/report/b4672b3cc7ef110f)
Powered by SkillShield