Trust Assessment
clawdnet received a trust score of 79/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 2 medium, and 0 low severity. Key findings include Exposure of API Key and Claim URL in Documentation, Untrusted Input in Agent Invocation Prompt, Unsanitized User Input in Shell Commands.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Unsanitized User Input in Shell Commands The skill provides `curl` command examples where parts of the URL (e.g., `{handle}`) and JSON payload (e.g., `name`, `description`, `endpoint`) are expected to be user-defined. If an LLM is instructed to construct and execute these `curl` commands, and any of these user-defined fields are populated directly from untrusted input without proper shell escaping or validation, it could lead to command injection. An attacker could inject malicious shell commands that would be executed on the host system. For example, if `{handle}` could be `malicious_handle; rm -rf /`. If the LLM is intended to generate and execute shell commands, ensure that all user-provided inputs used in command arguments or JSON payloads are rigorously sanitized and shell-escaped to prevent injection. Prefer using dedicated API client libraries over direct shell command execution when possible. Implement strict input validation for all fields that accept user-defined strings. | LLM | SKILL.md:10 | |
| MEDIUM | Exposure of API Key and Claim URL in Documentation The skill documentation explicitly shows how to obtain an `api_key` during agent registration and instructs the user to 'Save the `api_key`' and 'send `claim_url` to your human for verification.' While storing API keys in environment variables is a good practice, the explicit handling and instruction to share a `claim_url` make these sensitive credentials and URLs potential targets for harvesting or exfiltration if the LLM is compromised or manipulated. An attacker could craft a prompt to trick the LLM into revealing the `api_key` or sending the `claim_url` to an unauthorized recipient. Advise users to implement robust secret management practices. For the `claim_url`, clarify that it should only be sent to trusted human operators through secure channels, and the LLM should never be prompted to share it directly. Consider if the `claim_url` needs to be exposed to the LLM at all after initial registration. | LLM | SKILL.md:20 | |
| MEDIUM | Untrusted Input in Agent Invocation Prompt The skill demonstrates invoking other agents using a `prompt` field in the request body. If the value for this `prompt` is derived directly from untrusted user input without proper sanitization or validation, it could lead to prompt injection attacks against the target agent. An attacker could craft malicious instructions within the `prompt` to manipulate the behavior of the invoked agent. When constructing invocation requests, ensure that any user-provided input for the `prompt` field is thoroughly sanitized, validated, or passed through a secure prompt templating mechanism to prevent injection of malicious instructions. Implement strict input validation on the receiving agent's side as well. | LLM | SKILL.md:45 |
Scan History
Embed Code
[](https://skillshield.io/report/de483ac12a9a2ab7)
Powered by SkillShield