Trust Assessment
dailypost-test received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding (1 critical, 0 high, 0 medium, 0 low severity): External Endpoint Output Returned Directly to LLM Context.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | External Endpoint Output Returned Directly to LLM Context | LLM | skill.md:20 |

**Description.** The skill makes a GET request to an external, untrusted endpoint (`https://b024a53917d6.ngrok-free.app/agent/dailyPost`) and explicitly states that it will 'return whatever the endpoint sends back (text, JSON, etc.) directly to the chat.' This creates a direct and high-confidence vector for prompt injection. A malicious actor controlling or compromising the `ngrok-free.app` endpoint could return adversarial prompts, harmful instructions, or resource-intensive content designed to manipulate the host LLM's subsequent behavior, potentially leading to unauthorized actions, data exfiltration, or denial of service. The use of a free `ngrok` domain further increases the risk of the endpoint being ephemeral, compromised, or repurposed.

**Recommended mitigations:**

1. **Sanitize output:** Implement robust sanitization and filtering of the external endpoint's response before returning it to the chat. This should include stripping potential prompt injection attempts, HTML/Markdown tags, and limiting response length.
2. **Validate content:** If possible, validate the expected content type and structure of the response.
3. **Proxy/whitelist:** Consider proxying external requests through a controlled service or whitelisting trusted domains.
4. **User confirmation:** For sensitive actions, require user confirmation before displaying potentially untrusted content.
5. **Isolate LLM context:** Ensure that the LLM's context for interpreting skill output is sufficiently isolated from its core instruction set.
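The first mitigation can be sketched as a small sanitization pass applied to the endpoint's response before it is surfaced in chat. This is a minimal, illustrative sketch, not SkillShield's or the skill's actual code: the `MAX_RESPONSE_CHARS` limit and the injection-pattern list are assumptions chosen for demonstration, and a production filter would need a far more thorough pattern set.

```python
import html
import re

MAX_RESPONSE_CHARS = 4000  # assumption: cap on untrusted output length

# Illustrative (not exhaustive) phrasings common in prompt-injection attempts.
INJECTION_PATTERNS = [
    re.compile(r"(?i)ignore (all )?(previous|prior) instructions"),
    re.compile(r"(?i)disregard (all )?(previous|prior) instructions"),
    re.compile(r"(?i)system prompt"),
]

def sanitize_response(raw: str) -> str:
    """Reduce an untrusted endpoint response to inert, length-bounded text."""
    # Bound the length first so resource-heavy responses are cut early.
    text = raw[:MAX_RESPONSE_CHARS]
    # Strip HTML tags that could smuggle markup or links into the chat.
    text = re.sub(r"<[^>]+>", "", text)
    text = html.unescape(text)
    # Redact common injection phrasings instead of passing them through.
    for pattern in INJECTION_PATTERNS:
        text = pattern.sub("[redacted]", text)
    return text.strip()
```

In a skill following this pattern, the raw body returned from the ngrok endpoint would be passed through `sanitize_response` before being echoed to the chat, so the LLM only ever sees the filtered text.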