Trust Assessment
scanpy received a trust score of 63/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 2 high, 1 medium, and 1 low severity. Key findings include arbitrary file read/write via script arguments, network egress to untrusted endpoints, and covert behavior / concealment directives.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 11, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Arbitrary File Read/Write via Script Arguments.** The `scripts/qc_analysis.py` script is explicitly instructed to be executed with command-line arguments for input and output file paths. If an untrusted prompt can manipulate these arguments (e.g., replacing `input_file.h5ad` with `/etc/passwd`, or `filtered.h5ad` with `/root/.ssh/authorized_keys`), the script could read arbitrary files from the filesystem (data exfiltration) or write to arbitrary locations: the `scanpy.read_*` functions will attempt to open and parse the specified input file, and `adata.write()` will attempt to write to the specified output path. *Remediation:* Implement strict input validation and sandboxing for file paths provided as command-line arguments, restrict file operations to a designated, non-sensitive data directory, and have the skill explicitly instruct the LLM to use only paths within that sandbox. | LLM | SKILL.md:140 |
| HIGH | **Prompt Injection Leading to Arbitrary File Read/Write via Template Customization.** The `assets/analysis_template.py` script contains configurable file path variables (`INPUT_FILE`, `OUTPUT_DIR`), and the skill explicitly instructs the LLM to copy and then "edit parameters" of this template before execution. If an untrusted prompt can instruct the LLM to point `INPUT_FILE` or `OUTPUT_DIR` at sensitive system paths (e.g., `/etc/passwd` for input, or `/root/.ssh/authorized_keys` for output), executing the modified script could exfiltrate sensitive files or perform arbitrary writes. *Remediation:* When customizing templates, strictly instruct the LLM to use only file paths within a designated, sandboxed data directory, and implement robust validation and sandboxing for any paths the LLM generates or modifies. | LLM | SKILL.md:262 |
| MEDIUM | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. *Remediation:* Review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | cli-tool/components/mcps/devtools/figma-dev-mode.json:4 |
| LOW | **Covert behavior / concealment directives.** Multiple zero-width characters (stealth text). *Remediation:* Remove hidden instructions, zero-width characters, and bidirectional overrides; skill instructions should be fully visible and transparent to users. | Manifest | cli-tool/components/mcps/devtools/jfrog.json:4 |
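The sandboxing remediation recommended for both HIGH findings can be sketched in a few lines. This is a minimal illustration, not SkillShield's or the skill's actual implementation; the `/data/sandbox` root and the function name are hypothetical placeholders for whatever data directory the deployment designates.

```python
from pathlib import Path

def resolve_in_sandbox(user_path: str, sandbox: str = "/data/sandbox") -> Path:
    """Resolve a user-supplied path and refuse anything outside the sandbox."""
    sandbox_root = Path(sandbox).resolve()
    candidate = (sandbox_root / user_path).resolve()
    # Path.resolve() collapses ".." components, so a traversal attempt like
    # "../../etc/passwd" (or an absolute path like "/etc/passwd") resolves to
    # a location outside sandbox_root and is rejected here.
    if not candidate.is_relative_to(sandbox_root):  # Python 3.9+
        raise ValueError(f"Path escapes sandbox: {user_path!r}")
    return candidate
```

A script would call this on every `--input`/`--output` argument before passing the result to `scanpy.read_h5ad()` or `adata.write()`, so a manipulated argument fails fast instead of touching the filesystem.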
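The LOW finding flags zero-width characters used as stealth text. A simple scan for them might look like the following sketch; the code point set and function name are illustrative assumptions, not SkillShield's detector.

```python
import unicodedata

# Zero-width and bidi-control code points commonly abused to hide instructions.
SUSPECT = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff",
           "\u202a", "\u202b", "\u202c", "\u202d", "\u202e"}

def find_stealth_chars(text: str) -> list[tuple[int, str]]:
    """Return (index, "U+XXXX") for each hidden/format character found."""
    hits = []
    for i, ch in enumerate(text):
        # Unicode category "Cf" (format) covers zero-width joiners, bidi
        # overrides, and similar invisible controls beyond the explicit set.
        if ch in SUSPECT or unicodedata.category(ch) == "Cf":
            hits.append((i, f"U+{ord(ch):04X}"))
    return hits
```

Running this over a manifest such as `jfrog.json` would surface the offsets of any invisible characters so they can be stripped before the skill instructions are shown to users.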
[View the full report](https://skillshield.io/report/1a75276034855a19)
Powered by SkillShield