Trust Assessment
token-layer received a trust score of 88/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 0 high, 2 medium, and 1 low severity. Key findings include: Potential Command Injection via External Binaries; Server-Side Request Forgery (SSRF) Risk via Image URL; and Sensitive Data Storage in Local Files.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **Potential Command Injection via External Binaries.** The skill manifest requires the `jq` and `curl` binaries. LLM agents often construct shell commands dynamically from user-provided input. If the agent's implementation does not properly sanitize or escape untrusted input before interpolating it into shell commands for `curl` or `jq`, an attacker could craft malicious input (e.g., in token names, descriptions, or other parameters) to execute arbitrary commands on the host system. The agent must rigorously sanitize and escape all user-provided input before constructing and executing shell commands involving `curl`, `jq`, or any other external binaries, or use libraries that execute commands safely without direct shell interpolation. | LLM | SKILL.md:10 |
| MEDIUM | **Server-Side Request Forgery (SSRF) Risk via Image URL.** The `create-token-transaction` endpoint accepts an `image` parameter which can be a URL. If the `tokenlayer.network` backend fetches this URL without proper validation and network access controls, an attacker could instruct the agent to provide a malicious URL (e.g., pointing to internal network resources like `http://localhost/admin` or `file:///etc/passwd`), allowing access to or manipulation of internal systems or sensitive files on the `tokenlayer.network` server. The API should strictly validate and sanitize all URLs passed to the `image` parameter, blocking internal IP addresses, private networks, and non-HTTP/HTTPS schemes (e.g., `file://`). The agent should also be cautious about passing arbitrary URLs from untrusted user input to this endpoint. | LLM | SKILL.md:129 |
| LOW | **Sensitive Data Storage in Local Files.** Rule 7 instructs the agent to save a note (e.g., `memory/token-layer.json` or TOOLS.md) with the account email/user_id after entering a referral code. Storing Personally Identifiable Information (PII) such as email addresses or user IDs in local agent memory or files creates a risk: if the agent's local storage environment is compromised, this sensitive data could be exposed or exfiltrated. Avoid storing PII locally if possible. If local storage is necessary, encrypt the data at rest and strictly control access. Consider using ephemeral storage or relying on the API for user identification rather than persisting sensitive details locally. | LLM | SKILL.md:39 |
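The safe-execution advice in the command-injection finding can be sketched in Python. The key idea is that untrusted input is either passed as a separate argv element (so no shell is involved at all) or wrapped with `shlex.quote` when a shell string is unavoidable. The helper names and the sample input below are illustrative, not part of the skill itself.

```python
import shlex
import subprocess

def fetch_url_argv(url: str) -> list[str]:
    """Build curl arguments as a list: run with subprocess.run(argv) and no
    shell, so metacharacters in `url` are passed through as literal data."""
    return ["curl", "-sS", "--fail", url]

def build_curl_command(url: str) -> str:
    """If a shell string is truly required, quote the untrusted part."""
    return "curl -sS --fail " + shlex.quote(url)

# An attacker-controlled value that would be dangerous if interpolated raw:
malicious = "https://example.com/; rm -rf ~"

# With shlex.quote, the shell sees the entire value as one literal word:
print(shlex.split(build_curl_command(malicious)))

# The unsafe pattern the finding warns about (do NOT do this):
# subprocess.run(f"curl {malicious} | jq .", shell=True)
```

Running either variant with `subprocess.run(fetch_url_argv(url))` never spawns a shell, so `;`, `$()`, and `|` in the input have no special meaning.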
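The URL validation the SSRF finding recommends could look roughly like the following. This is a minimal sketch, not an exhaustive defense (it does not cover DNS rebinding or HTTP redirects), and the function name and policy are assumptions rather than the `tokenlayer.network` implementation.

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_safe_image_url(url: str) -> bool:
    """Reject URLs that could reach internal services or local files."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.hostname:
        return False  # blocks file://, gopher://, scheme-less input, etc.
    try:
        # Resolve the host and reject private/loopback/link-local targets.
        for info in socket.getaddrinfo(parsed.hostname, None):
            addr = ipaddress.ip_address(info[4][0])
            if addr.is_private or addr.is_loopback or addr.is_link_local:
                return False
    except (socket.gaierror, ValueError):
        return False
    return True
```

For example, `is_safe_image_url("file:///etc/passwd")` and `is_safe_image_url("http://127.0.0.1/admin")` both return `False`, while a public HTTP(S) address passes.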
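One way to follow the low-severity finding's advice without losing the referral note entirely is to persist a keyed hash of the identifiers instead of the raw values: the agent can still recognize which account was linked, but a leaked note exposes no PII. All names below are illustrative assumptions, not part of the skill.

```python
import hashlib
import hmac
import json

def referral_note(email: str, user_id: str, secret: bytes) -> dict:
    """Build a note that identifies the linked account via HMAC tags
    (keyed by a local secret) instead of storing the raw email/user_id."""
    def tag(value: str) -> str:
        return hmac.new(secret, value.encode(), hashlib.sha256).hexdigest()
    return {"email_hmac": tag(email.lower()), "user_id_hmac": tag(user_id)}

note = referral_note("alice@example.com", "u_123", secret=b"local-agent-secret")
serialized = json.dumps(note)

# The raw PII never appears in the stored note:
assert "alice@example.com" not in serialized
```

The lowercasing makes the email tag stable across case variants; rotating the local secret invalidates all stored tags at once.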
Full report: https://skillshield.io/report/939c8ca27d798b30
Powered by SkillShield