Trust Assessment
tavily received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 2 critical, 0 high, 0 medium, and 0 low severity. Both concern Prompt Injection via External Content Display.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection via External Content Display.** The skill retrieves content from an external API (Tavily) and prints it directly to standard output. If crafted maliciously, this content could contain instructions designed to manipulate the host LLM's behavior, leading to prompt injection. This affects both the search results and the extracted content. Mitigation: sanitize the output or clearly demarcate external content so the LLM does not interpret it as new instructions; consider a structured output format (e.g., JSON) that the LLM can parse explicitly, or explicit 'tool output' tags around the content. | LLM | scripts/search.mjs:64 |
| CRITICAL | **Prompt Injection via External Content Display.** Same issue as above: externally retrieved content is printed directly to standard output and could carry injected instructions. The same mitigations apply. | LLM | scripts/extract.mjs:45 |
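The demarcation-plus-JSON mitigation described in the findings could be sketched as follows. This is an illustrative example, not code from the skill itself: the `wrapExternalContent` function name and the `<tool_output>` tag format are assumptions, standing in for whatever convention the host LLM is configured to recognize.

```javascript
// Hypothetical mitigation sketch: serialize untrusted API results as JSON
// and wrap them in explicit demarcation tags before printing, so the host
// LLM can treat the span as data rather than as new instructions.
function wrapExternalContent(results) {
  // JSON serialization keeps free-form page text from reading as prose
  // directed at the model.
  const payload = JSON.stringify({ source: "tavily", results }, null, 2);
  // Explicit tags mark the boundary of tool output. The tag name here is
  // an assumption; use whatever marker the host system treats as inert.
  return ['<tool_output source="tavily">', payload, "</tool_output>"].join("\n");
}

// Example: instead of printing raw API output, print the wrapped form.
console.log(
  wrapExternalContent([{ title: "Example result", url: "https://example.com" }])
);
```

Tag-based demarcation only helps if the host LLM is instructed to treat the tagged span as inert data; on its own it raises the bar for injection but does not eliminate it.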