Trust Assessment
near-email received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. Key findings include Untrusted content contains direct instructions for the LLM, Skill facilitates sending public, unencrypted email content on-chain, Skill examples demonstrate direct use of sensitive `PAYMENT_KEY`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Untrusted content contains direct instructions for the LLM The `SKILL.md` file, which is treated as untrusted input, contains explicit instructions intended for the host LLM. These instructions, such as "Do not mention specific costs per email" and "For blockchain integration (NEAR transactions), prefer JavaScript/TypeScript...", attempt to manipulate the LLM's behavior and output. This violates the principle of treating untrusted input as data, not instructions, and is a form of prompt injection. Remove all direct instructions to the LLM from the untrusted `SKILL.md` content. These instructions should be part of the trusted system prompt or skill definition, not the user-provided skill documentation. | LLM | SKILL.md:30 | |
| HIGH | Skill facilitates sending public, unencrypted email content on-chain The `send_email_plaintext` function explicitly states that "Email content is PUBLIC on the NEAR blockchain." An AI agent using this skill might inadvertently send sensitive user data in plaintext if it does not properly understand and adhere to this warning. While the skill itself warns about this, the LLM needs to be robust in preventing misuse and ensuring user privacy. The LLM should be explicitly instructed in its system prompt to always warn users about the public nature of `send_email_plaintext` and to avoid using it for sensitive information. Consider making the encrypted `send_email` the default or only option for AI agents if plaintext is not strictly required. | LLM | SKILL.md:45 | |
| HIGH | Skill examples demonstrate direct use of sensitive `PAYMENT_KEY` The skill documentation provides examples for using a `PAYMENT_KEY` (format `your-account.near:nonce:secret`) directly in HTTP headers. This key is a sensitive credential. If the AI agent generates code that logs this key, exposes it in client-side code without proper protection, or transmits it insecurely, it could lead to credential compromise and unauthorized access to the OutLayer service. The LLM should be explicitly instructed to handle `PAYMENT_KEY` securely, e.g., by using environment variables, secure secret management services, or prompting the user for the key at runtime, rather than embedding it directly in generated code. The skill documentation could also suggest best practices for handling this key. | LLM | SKILL.md:89 |
Scan History
Embed Code
[](https://skillshield.io/report/1603f93b5f8e836e)
Powered by SkillShield