Trust Assessment
zerion-api received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. The key findings are Insecure API Key Collection via Chat and API Key Embedded in Inner LLM Prompt.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Insecure API Key Collection via Chat** — The skill instructs the LLM to ask the user for their Zerion API key directly in the chat interface. Providing sensitive credentials such as API keys in plain text within a chat conversation is highly insecure: chat histories are often logged, stored, and accessible to various parties (e.g., the LLM provider, developers, administrators), enabling credential harvesting and compromise. While the skill advises against logging the key, the collection method itself creates a critical exposure vector. Never ask users to provide API keys or other sensitive credentials directly in chat; instead, guide them to configure the key in a dedicated environment variable, a secure secrets manager, or a secure input field (e.g., `type="password"` in an artifact) that does not log the input. The skill correctly suggests `type="password"` for artifacts, which contradicts its own instruction to collect the key via chat. | LLM | SKILL.md:14 |
| HIGH | **API Key Embedded in Inner LLM Prompt** — The skill explicitly instructs embedding the Zerion API key into the prompt sent to the inner Claude LLM for authentication, e.g. ``content: `Use the Zerion API with key "${apiKey}" to get the portfolio...` ``. While this is a common pattern for LLM-based tools, it makes the sensitive key part of the prompt data: if the inner LLM's prompts are logged, stored, or accessible for debugging, monitoring, or by the LLM provider, the key could be exposed, leading to data exfiltration. Avoid embedding credentials directly in LLM prompts. Prefer a secure authentication mechanism for the MCP connector, such as passing the key via a secure header or a dedicated, non-logged parameter handled by the MCP server directly, rather than processed as part of the LLM's natural-language input. If prompt embedding is unavoidable, ensure the LLM provider guarantees that prompts containing such data are not logged or stored in an accessible manner. | LLM | SKILL.md:130 |
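Both findings point to the same remediation: keep the key out of the conversation and out of the prompt. A minimal Node.js sketch of that pattern is below; the `ZERION_API_KEY` variable name, the endpoint path, and the HTTP Basic auth scheme are illustrative assumptions, not details taken from the skill itself.

```javascript
// Sketch: read the key from the environment instead of asking for it in
// chat, and send it in an HTTP header instead of embedding it in the
// LLM prompt. All names here are illustrative assumptions.
function getZerionKey(env = process.env) {
  const key = env.ZERION_API_KEY; // assumed variable name
  if (!key) {
    throw new Error(
      "ZERION_API_KEY is not set. Configure it as an environment " +
        "variable or in a secrets manager; never paste it into chat."
    );
  }
  return key;
}

function buildPortfolioRequest(address, key) {
  // The key travels only in the Authorization header; the URL, request
  // body, and any LLM prompt never contain it in plain text.
  return {
    url: `https://api.zerion.io/v1/wallets/${address}/portfolio`, // assumed path
    headers: {
      // Assumed Basic-auth scheme (key as username, empty password);
      // check the provider's documentation for the exact mechanism.
      Authorization: "Basic " + Buffer.from(`${key}:`).toString("base64"),
      accept: "application/json",
    },
  };
}
```

With this shape, the inner LLM only ever sees the request description ("get the portfolio for address X"), while the MCP server or HTTP client attaches the credential out of band.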
[View the full report](https://skillshield.io/report/334c1c1ffc03b258)