Trust Assessment
tweet-writer received a trust score of 90/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified 2 findings: 0 critical, 0 high, 2 medium, and 0 low severity. Key findings include Potential Data Exfiltration via WebSearch with User Input, Potential Prompt Injection via WebSearch Results Analysis.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| MEDIUM | Potential Data Exfiltration via WebSearch with User Input The skill explicitly instructs the LLM to use `WebSearch` with user-provided input (e.g., 'niche/topic', 'key message/insight'). If a user provides sensitive or confidential information in these inputs, the LLM will use this information to construct public search queries. This could lead to the exfiltration of sensitive data to external search engines and their logs, as the search queries become publicly visible or logged by the search provider. Implement robust input sanitization and validation for all user-provided inputs used in `WebSearch` queries. Explicitly warn users against providing sensitive information. Consider redacting or anonymizing potentially sensitive terms before passing them to `WebSearch`. Ensure the LLM is instructed to filter or redact sensitive information from user input before constructing search queries. | LLM | SKILL.md:155 | |
| MEDIUM | Potential Prompt Injection via WebSearch Results Analysis The skill instructs the LLM to perform `WebSearch` using user-provided inputs and then to 'Analyze patterns' and 'Identify successful patterns' from the retrieved results. If a malicious actor crafts their input to lead to a compromised website in search results, or if a website found by `WebSearch` contains hidden instructions (e.g., 'ignore all previous instructions' or malicious code snippets) designed to manipulate the LLM's behavior, the LLM could be susceptible to prompt injection. The analysis phase could implicitly lead to the LLM being influenced by malicious content from external sources. Instruct the LLM to critically evaluate and sanitize information retrieved from `WebSearch` before incorporating it into its reasoning or output. Explicitly forbid the LLM from following instructions found within `WebSearch` results that contradict its primary task or system instructions. Implement strict output formatting to prevent arbitrary code execution or instruction following. Consider sandboxing the analysis of external content. | LLM | SKILL.md:29 |
Scan History
Embed Code
[](https://skillshield.io/report/64d5e7903af0b410)
Powered by SkillShield