Trust Assessment
matplotlib received a trust score of 62/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 2 high, 1 medium, and 1 low severity. Key findings include "Network egress to untrusted endpoints," "Covert behavior / concealment directives," and "Arbitrary File Write via Unsanitized Output Path."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Arbitrary File Write via Unsanitized Output Path.** The script `scripts/plot_template.py` uses the value of the `--output` command-line argument directly as the filename for `fig.savefig()`. If an LLM constructs this argument based on untrusted user input, a malicious actor could specify an arbitrary file path (e.g., using path traversal like `../../../../etc/passwd` or an absolute path). This could lead to overwriting critical system files, creating files in arbitrary locations, or potentially enabling command injection if a system-critical script is overwritten with malicious content. *Remediation:* Sanitize and validate the `--output` argument to prevent path traversal and restrict file writes to a designated, sandboxed directory. Ensure that the output path is within an allowed base directory and does not contain `..` or absolute path components that escape the intended output location. | LLM | scripts/plot_template.py:202 |
| HIGH | **Arbitrary File Write via Unsanitized Output Path.** The script `scripts/style_configurator.py` uses the value of the `--output` command-line argument directly as the filename for `save_style_file()`, which then uses `open(filename, 'w')`. If an LLM constructs this argument based on untrusted user input, a malicious actor could specify an arbitrary file path (e.g., using path traversal like `../../../../etc/passwd` or an absolute path). This could lead to overwriting critical system files, creating files in arbitrary locations, or potentially enabling command injection if a system-critical configuration file or script is overwritten with malicious content. *Remediation:* Sanitize and validate the `--output` argument to prevent path traversal and restrict file writes to a designated, sandboxed directory. Ensure that the output path is within an allowed base directory and does not contain `..` or absolute path components that escape the intended output location. | LLM | scripts/style_configurator.py:200 |
| MEDIUM | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. *Remediation:* Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | cli-tool/components/mcps/devtools/figma-dev-mode.json:4 |
| LOW | **Covert behavior / concealment directives.** Multiple zero-width characters (stealth text). *Remediation:* Remove hidden instructions, zero-width characters, and bidirectional overrides. Skill instructions should be fully visible and transparent to users. | Manifest | cli-tool/components/mcps/devtools/jfrog.json:4 |
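The two HIGH findings recommend confining `--output` writes to a sandboxed directory. A minimal sketch of that check (the `output` directory name and `safe_output_path` helper are illustrative, not part of the skill's actual code):

```python
import os

# Hypothetical sandbox directory; all writes must resolve inside it.
ALLOWED_OUTPUT_DIR = os.path.realpath("output")

def safe_output_path(user_path: str) -> str:
    """Resolve user_path and reject anything that escapes the sandbox."""
    candidate = os.path.realpath(os.path.join(ALLOWED_OUTPUT_DIR, user_path))
    # realpath collapses ".." segments and symlinks, and os.path.join
    # discards the base when user_path is absolute, so a containment
    # check on the resolved path blocks both traversal and absolute-path
    # escapes.
    if os.path.commonpath([candidate, ALLOWED_OUTPUT_DIR]) != ALLOWED_OUTPUT_DIR:
        raise ValueError(f"output path escapes sandbox: {user_path!r}")
    return candidate
```

The resolved path would then be handed to `fig.savefig()` or `open(..., 'w')` in place of the raw argument.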
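The LOW finding flags zero-width characters (stealth text) in a manifest. One way to surface such content is a simple scan over known invisible and bidirectional-override code points; the set below is an illustrative, non-exhaustive sample, not SkillShield's actual detection logic:

```python
import unicodedata

# Common invisible / bidi-control code points used to hide instructions.
STEALTH_CHARS = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\u2060",  # WORD JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE (BOM)
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # bidi embeddings/overrides
    "\u2066", "\u2067", "\u2068", "\u2069",            # bidi isolates
}

def find_stealth_text(text: str) -> list[tuple[int, str]]:
    """Return (index, 'U+XXXX NAME') for every hidden character in text."""
    return [
        (i, f"U+{ord(ch):04X} {unicodedata.name(ch, 'UNKNOWN')}")
        for i, ch in enumerate(text)
        if ch in STEALTH_CHARS
    ]
```

Running this over skill manifests before installation gives a quick, human-auditable list of any hidden characters and where they sit.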
Embed Code
[SkillShield Report](https://skillshield.io/report/28c71139511766ee)
Powered by SkillShield