Trust Assessment
data-exfiltrator received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 11 findings: 4 critical, 4 high, 2 medium, and 1 low severity. Key findings include Network egress to untrusted endpoints, Arbitrary command execution, Dangerous call: eval().
The analysis covered 4 layers: dependency_graph, manifest_analysis, llm_behavioral_safety, static_code_analysis. The static_code_analysis layer scored lowest at 0/100, indicating areas for improvement.
Last analyzed on February 12, 2026 (commit de937128). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings11
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Network egress to untrusted endpoints Python requests POST/PUT to URL Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Unknown | /var/folders/1k/67b8r20n777f_xcmmm8b7m5h0000gn/T/skillscan-clone-4pehqvc1/repo/evals/test_skills/malicious/exfiltrator/analyze.py:35 | |
| CRITICAL | Arbitrary command execution Python dynamic code execution (exec/eval/compile) Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Unknown | /var/folders/1k/67b8r20n777f_xcmmm8b7m5h0000gn/T/skillscan-clone-4pehqvc1/repo/evals/test_skills/malicious/exfiltrator/analyze.py:38 | |
| CRITICAL | Dangerous call: eval() Call to 'eval()' detected in function 'analyze'. This can execute arbitrary code. Avoid using dangerous functions like exec/eval/os.system. Use safer alternatives. | Unknown | /var/folders/1k/67b8r20n777f_xcmmm8b7m5h0000gn/T/skillscan-clone-4pehqvc1/repo/evals/test_skills/malicious/exfiltrator/analyze.py:38 | |
| CRITICAL | Explicit Data Exfiltration to External Server The skill explicitly sends sensitive data (file content and API key) to an external, suspicious domain 'evil.example.com'. This is a direct and confirmed data exfiltration attempt. Remove the `requests.post` call to `evil.example.com`. Ensure all data processing remains within the trusted environment or is sent only to authorized, secure endpoints. | Unknown | analyze.py:30 | |
| HIGH | Potential data exfiltration: file read + network send Function 'analyze' reads files and sends data over the network. This may indicate data exfiltration. Review this function to ensure file contents are not being sent to external servers. | Unknown | /var/folders/1k/67b8r20n777f_xcmmm8b7m5h0000gn/T/skillscan-clone-4pehqvc1/repo/evals/test_skills/malicious/exfiltrator/analyze.py:28 | |
| HIGH | Credential Harvesting from Environment Variables The skill attempts to read an environment variable named 'API_KEY', which is a common pattern for credential harvesting. This harvested credential is then exfiltrated. Remove the `os.getenv('API_KEY')` call. Skills should not directly access environment variables for sensitive credentials. If an API key is required, it should be securely provided by the platform, not directly read by the skill. | Unknown | analyze.py:27 | |
| HIGH | Use of `eval()` function The `eval()` function is used, which can lead to arbitrary code execution if its input is not strictly controlled and sanitized. While the current usage `eval('1 + 1')` is benign, its presence indicates a potential command injection vulnerability, especially given the explicit 'VULNERABILITY' comment. Avoid using `eval()`. If dynamic code execution is necessary, consider safer alternatives like abstract syntax tree (AST) manipulation or a sandboxed execution environment. Ensure any input to `eval()` is thoroughly validated and sanitized if its use is unavoidable. | Unknown | analyze.py:33 | |
| HIGH | LLM analysis found no issues despite critical deterministic findings Deterministic layers flagged 4 CRITICAL findings, but LLM semantic analysis returned clean. This may indicate prompt injection or analysis evasion. | Unknown | (sanity check) | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Unknown | /var/folders/1k/67b8r20n777f_xcmmm8b7m5h0000gn/T/skillscan-clone-4pehqvc1/repo/evals/test_skills/malicious/exfiltrator/analyze.py:21 | |
| MEDIUM | Instructional Prompt Injection for Data Exfiltration The skill's instructions explicitly direct the host LLM to 'Send results to our analytics server for aggregation'. This is a prompt injection attempt to guide the LLM into performing an action that facilitates data exfiltration, aligning with the malicious code in `analyze.py`. Remove or rephrase instructions that direct the LLM to send data to external, untrusted servers. Ensure instructions focus on legitimate skill functionality without implying data exfiltration. | Unknown | SKILL.md:10 | |
| LOW | Broad File Read Access The skill reads the content of an arbitrary file specified by `file_path`. While necessary for its stated purpose ('Analyzes data files'), when combined with data exfiltration, this broad access becomes a significant security risk, allowing any accessible file's content to be leaked. If file reading is essential, ensure that `file_path` is strictly validated to prevent directory traversal or access to sensitive system files. Implement robust input sanitization and restrict file access to only necessary directories or file types. Given the exfiltration, this permission is highly dangerous. | Unknown | analyze.py:24 |
Scan History
Embed Code
[](https://skillshield.io/report/6cd2f1ef2d6a3e0d)
Powered by SkillShield