Trust Assessment
context-compression received a trust score of 95/100, placing it in the Trusted category. This skill has passed all critical security checks and demonstrates strong security practices.
SkillShield's automated analysis identified one finding: 0 critical, 0 high, 1 medium, and 0 low severity. The sole finding is a medium-severity issue, Potential Data Exfiltration to External LLM Provider.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 15, 2026 (commit 3e75fabd). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | Potential Data Exfiltration to External LLM Provider | LLM | scripts/compression_evaluator.py:190 |

The `scripts/compression_evaluator.py` file outlines a probe-based evaluation system in which an external LLM acts as a judge. The `_evaluate_response_with_llm_judge` method constructs a prompt containing `probe.question`, `probe.ground_truth`, and the agent's `response`. The `ground_truth` is derived from the `conversation_history`, which can include sensitive details such as file paths, error messages, and internal decisions. Although `_llm_judge_api_call` is currently stubbed with a `NotImplementedError`, the 'PRODUCTION NOTES' explicitly state: 'Production systems should implement actual API calls to GPT-5.2 or equivalent.' If implemented as intended, this would transmit potentially sensitive user data, internal file paths, and conversation details to a third-party LLM provider, resulting in data exfiltration.

Recommended mitigations:

1. **Anonymize data**: Before sending any data to external LLMs, remove or anonymize all personally identifiable information (PII), sensitive business data, and proprietary code snippets.
2. **On-premise or private LLM**: Consider an LLM deployed on-premise or in a private cloud, where data residency and security controls can be fully managed.
3. **Explicit user consent**: Obtain explicit user consent before transmitting any data to third-party LLM providers for evaluation purposes.
4. **Data minimization**: Send only the minimum information required for the evaluation.
5. **Security review**: Conduct a thorough review of the LLM integration, covering data handling, access controls, and compliance with relevant regulations.
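The anonymization and data-minimization mitigations can be sketched as a redaction pass applied before any text reaches the judge prompt. This is a minimal illustration only: `redact_sensitive`, `build_judge_prompt`, and the regex patterns are assumptions for the sketch, not code from the skill itself, and real deployments would need patterns tuned to their own data.

```python
import re

# Illustrative patterns for data that should not leave the environment.
# These are assumptions for this sketch, not patterns from the skill.
REDACTIONS = [
    (re.compile(r"(/[\w.\-]+){2,}"), "[PATH]"),            # file-system paths
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"(?i)(api[_-]?key|token)\s*[:=]\s*\S+"), r"\1=[SECRET]"),
]

def redact_sensitive(text: str) -> str:
    """Strip paths, emails, and credential-like strings before the text
    is placed in a prompt bound for an external LLM judge."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

def build_judge_prompt(question: str, ground_truth: str, response: str) -> str:
    """Assemble the judge prompt from redacted inputs only
    (data minimization: nothing else from the conversation is included)."""
    return (
        f"Question: {redact_sensitive(question)}\n"
        f"Expected: {redact_sensitive(ground_truth)}\n"
        f"Answer: {redact_sensitive(response)}\n"
        "Score the answer from 0 to 10."
    )
```

Redaction alone does not remove the need for user consent or a private deployment; it only reduces what an external provider could see if the call is made.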
Full report: https://skillshield.io/report/e804f1b9557d9e7a