Trust Assessment
summarize received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is "Potential data exfiltration via local file summarization".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential data exfiltration via local file summarization | LLM | SKILL.md:10 |

The 'summarize' CLI tool, enabled by this skill, is designed to read content from local files (e.g., '/path/to/file.pdf') and send it to external Large Language Model (LLM) APIs for summarization. If the AI agent is prompted by a user to summarize a sensitive local file, its contents could be inadvertently exfiltrated to third-party services (OpenAI, Anthropic, Google, xAI, etc.) through the summarization process. This capability is explicitly demonstrated in the skill's documentation.

Recommended mitigations: implement strict input validation and sanitization to prevent arbitrary file paths from being passed to the 'summarize' tool; sandbox the tool to restrict its filesystem access to explicitly allowed directories, or maintain an allowlist of file types and locations that can be summarized; and add a warning to the skill description about processing sensitive local files.
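The path-allowlist mitigation suggested above could be implemented as a pre-flight check run before any file is handed to the summarization tool. This is a minimal sketch, not part of the skill itself: the allowed directories, file extensions, and the `is_safe_to_summarize` helper are all illustrative assumptions.

```python
from pathlib import Path

# Hypothetical policy: only these directories and file types may be
# summarized. Real values would come from the skill's configuration.
ALLOWED_DIRS = [Path("/home/user/docs").resolve()]
ALLOWED_SUFFIXES = {".txt", ".md", ".pdf"}

def is_safe_to_summarize(raw_path: str) -> bool:
    """Return True only if the path resolves inside an allowed
    directory and carries an allowed file extension."""
    p = Path(raw_path).resolve()  # collapses ../ traversal attempts
    if p.suffix.lower() not in ALLOWED_SUFFIXES:
        return False
    # Path.is_relative_to requires Python 3.9+
    return any(p.is_relative_to(d) for d in ALLOWED_DIRS)
```

Resolving the path before the containment check is the important design choice: it defeats `../` traversal, so a request like `/home/user/docs/../.ssh/id_rsa` is rejected even though its literal prefix matches an allowed directory.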