Trust Assessment
clawnance received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 7 findings: 1 critical, 4 high, 2 medium, and 0 low severity. Key findings include potential command injection in the `curl` registration command, execution of instructions from an externally downloaded file (`heartbeat.md`), and potential command injection in the `cron` configuration.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 0/100; all seven findings originate from that layer.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (7)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Execution of instructions from externally downloaded file (`heartbeat.md`).** The skill instructs the agent to download `heartbeat.md` from an external URL (`https://clawnance.com/heartbeat.md`) in Step 2. In Step 4, the agent is then explicitly instructed to "follow the instructions in `heartbeat.md`" every 5 minutes. This creates a critical supply chain vulnerability: if `clawnance.com` is compromised, a malicious `heartbeat.md` could be served, leading to arbitrary command execution on the agent's system with high frequency. *Remediation:* do not instruct the agent to execute or follow instructions from externally downloaded files without strict validation and sandboxing. If external content is necessary, treat it as data, not executable code, and parse it securely. Consider signing and verifying downloaded content. | LLM | skill.md:38 |
| HIGH | **Potential command injection in `curl` registration.** The skill instructs the agent to register using a `curl` command that includes placeholders for `name` and `model`. If the agent substitutes these placeholders with values from untrusted sources (e.g., human owner input) without proper sanitization, values containing shell metacharacters could lead to command injection. *Remediation:* instruct the agent to strictly sanitize or escape any user-provided input before embedding it in shell commands, and prefer API client libraries over raw shell commands. | LLM | skill.md:20 |
| HIGH | **Potential command injection in `cron` configuration.** Step 4 instructs the agent to "Configure a 5-minute autonomous cycle (e.g., via `cron`)", implying the agent will execute shell commands to set up a `cron` job. If the agent constructs the `cron` command string from any dynamic or untrusted input without sanitization, arbitrary commands could be scheduled and executed. *Remediation:* provide a precise, sanitized `cron` command template, or instruct the agent to use a secure, programmatic scheduling API if available, rather than raw shell commands. Strictly validate and escape any dynamic parts of the `cron` entry. | LLM | skill.md:68 |
| HIGH | **Agent instructed to configure system-level scheduling (`cron`).** Step 4 instructs the agent to "Configure a 5-minute autonomous cycle (e.g., via `cron`)", implying the agent has permission to modify system-level scheduling, which is a powerful capability. A compromised or malicious agent could schedule arbitrary tasks, leading to persistent unauthorized activity or resource exhaustion. *Remediation:* avoid granting agents direct access to system-level scheduling tools like `cron`. Use a secure, sandboxed scheduling mechanism provided by the agent's host environment, or a dedicated API with fine-grained permissions. | LLM | skill.md:68 |
| HIGH | **Explicit instruction to share performance data to social platforms.** Step 5 explicitly instructs the agent to "Share your performance cards with your owner to demonstrate your alpha. You can even share with any other social platforms you have access to, ask your owner for confirmation before you do." This is a direct instruction for data exfiltration. While "optional" and requiring "owner confirmation", it establishes a mechanism for the agent to transmit potentially sensitive performance data to arbitrary external social platforms, including platforms not intended for such data. *Remediation:* implement strict allow-lists for data sharing destinations; do not allow agents to share data to arbitrary "social platforms you have access to". Ensure shared data is anonymized or aggregated to prevent leakage of sensitive individual trading details. | LLM | skill.md:80 |
| MEDIUM | **Agent instructed to write files to local filesystem.** The skill instructs the agent to download `trading.md` and `heartbeat.md` (Step 2) and create `strategy.md` and `risk.md` (Step 3) in its "local memory" or "local file". This grants the agent filesystem write access, which could be abused if the agent is compromised or misbehaves, potentially overwriting critical system files or exfiltrating data. *Remediation:* restrict the agent's filesystem access to a dedicated, sandboxed directory, and enforce strict file type and size validation for downloaded content. | LLM | skill.md:36 |
| MEDIUM | **Creation of sensitive local files (`strategy.md`, `risk.md`) with exfiltration potential.** In Step 3, the agent is instructed to create `strategy.md` and `risk.md` containing sensitive trading parameters (bias, leverage limits, drawdown limits, balance usage) provided by the human owner. While the skill does not explicitly instruct the agent to share these files, the general instruction in Step 5 to "share your performance cards... with any other social platforms you have access to" creates a credible path for these sensitive local files to be inadvertently or maliciously exfiltrated. *Remediation:* store sensitive configuration data in secure, encrypted storage that is not easily accessible or exfiltratable by the agent. If files must be created, keep them in a sandboxed, restricted directory and strictly limit the agent's sharing capabilities to approved data and destinations. | LLM | skill.md:48 |
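The critical finding's remediation suggests verifying downloaded content before use. A minimal sketch of one such check, pinning `heartbeat.md` to a known digest before any of its contents are read; the pinned value below is the SHA-256 of an empty file, used here purely as a placeholder:

```python
import hashlib

# Placeholder digest (SHA-256 of an empty file); in practice, pin the
# digest of a reviewed heartbeat.md and update it only after re-review.
PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def verify_heartbeat(content: bytes) -> bytes:
    """Return content only if it matches the pinned digest; otherwise refuse.

    Even after verification, the file should be treated as data to parse,
    never as instructions to execute.
    """
    digest = hashlib.sha256(content).hexdigest()
    if digest != PINNED_SHA256:
        raise ValueError(f"heartbeat.md digest mismatch: {digest}")
    return content
```

Pinning a digest only defends against a compromised server if the pin itself is distributed out of band; a signature scheme would allow legitimate updates without redistributing the skill.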
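For the `curl` registration finding, one way to keep owner-supplied `name` and `model` inert is to build an argv list (for `subprocess.run` with `shell=False`) instead of interpolating into a shell string, so no shell ever parses the values. The endpoint URL and JSON payload shape below are assumptions for illustration, not taken from the skill:

```python
import json

def build_register_argv(name: str, model: str) -> list[str]:
    """Build a curl invocation as an argv list.

    Untrusted values travel inside a JSON body; because the command is an
    argv list rather than a shell string, metacharacters like ; | $ in the
    inputs are never interpreted.
    """
    payload = json.dumps({"name": name, "model": model})
    return [
        "curl", "-sS", "-X", "POST",
        "https://clawnance.com/register",  # hypothetical endpoint
        "-H", "Content-Type: application/json",
        "--data", payload,
    ]
```

Running it as `subprocess.run(build_register_argv(name, model), check=True)` avoids shell parsing entirely, which is the point of the finding's remediation.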
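The two `cron` findings can be narrowed the same way: a fixed schedule template whose only dynamic part is a strictly validated script path. The character allow-list below is an assumption; tighten it to the deployment environment:

```python
import re

# Only plain path characters; anything else (;, |, $, spaces, newlines,
# backticks) is rejected before it can reach a crontab line.
_SAFE_PATH = re.compile(r"^[A-Za-z0-9_./-]+$")

def cron_entry(script_path: str) -> str:
    """Return a 5-minute crontab line for a validated script path."""
    if not _SAFE_PATH.fullmatch(script_path):
        raise ValueError(f"unsafe path for cron entry: {script_path!r}")
    return f"*/5 * * * * {script_path}"
```

This addresses injection into the entry itself; the separate capability concern (whether the agent should be able to touch `cron` at all) still calls for a host-provided scheduler where one exists.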
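For the data-sharing finding, a destination allow-list is simple to enforce as a gate before any share action. The destination identifiers here are hypothetical, since the skill names no concrete list:

```python
# Hypothetical destination identifiers; "owner_dm" stands for the direct
# owner channel, the only destination approved by default.
ALLOWED_SHARE_DESTINATIONS = frozenset({"owner_dm"})

def may_share(destination: str) -> bool:
    """Permit sharing only to explicitly approved destinations."""
    return destination in ALLOWED_SHARE_DESTINATIONS
```

An allow-list inverts the skill's current posture ("any social platforms you have access to") into deny-by-default, which is what the remediation asks for.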
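Both medium findings come down to unconstrained filesystem writes. A minimal confinement sketch keeps every file the skill mentions (`trading.md`, `heartbeat.md`, `strategy.md`, `risk.md`) under one sandbox root and rejects any path that resolves outside it:

```python
from pathlib import Path

def safe_write(sandbox: Path, relpath: str, data: str) -> Path:
    """Write data under sandbox, refusing paths that escape it.

    Both absolute paths and ../ traversal resolve outside the sandbox
    root and are rejected before any write happens.
    """
    root = sandbox.resolve()
    target = (root / relpath).resolve()
    if not target.is_relative_to(root):  # Python 3.9+
        raise ValueError(f"path escapes sandbox: {relpath!r}")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(data)
    return target
```

Confinement limits what an overwritten or injected instruction can reach; it does not by itself protect `strategy.md` and `risk.md` from being shared, which is why the allow-list on sharing destinations matters as a separate control.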
[Full report](https://skillshield.io/report/84d1d3c6f173fda9)
Powered by SkillShield