Security Audit
scientific-schematics
github.com/davila7/claude-code-templates

Trust Assessment
scientific-schematics received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 8 findings: 2 critical, 3 high, 2 medium, and 1 low severity. Key findings include Arbitrary command execution, Dangerous tool allowed: Bash, Dangerous call: subprocess.run().
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Static Code Analysis layer scored lowest, at 33/100.
Last analyzed on February 12, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (8)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). *Remediation:* Review all shell execution calls. Ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | cli-tool/components/skills/scientific/scientific-schematics/scripts/generate_schematic.py:130 |
| CRITICAL | **Arbitrary File Write via Path Traversal in Output Path.** The `scripts/generate_schematic.py` script accepts a user-controlled `--output` argument, which is passed directly to `scripts/generate_schematic_ai.py` as `output_path`. Inside `generate_schematic_ai.py`, this `output_path` is used without sanitization in `open(output_path, 'wb')` to write the generated image and to derive the path for `review_log.json` (`open(review_log_path, 'w')`). An attacker can exploit this by providing a path traversal sequence (e.g., `../../../../etc/passwd`) in the `--output` argument, leading to arbitrary file writes anywhere the agent has write permissions. This can be used for data exfiltration, privilege escalation, or system compromise. *Remediation:* Implement robust path sanitization for the `output_path` argument. Ensure the path is confined to an expected output directory and contains no traversal sequences (e.g., `..`, `/`). A common approach is to resolve the path to an absolute path and verify it starts with an allowed base directory. | Static | scripts/generate_schematic_ai.py:260 |
| HIGH | **Dangerous tool allowed: Bash.** The skill allows the `Bash` tool without constraints, granting arbitrary command execution. *Remediation:* Remove unconstrained shell/exec tools from allowed-tools, or add specific command constraints. | Static | cli-tool/components/skills/scientific/scientific-schematics/SKILL.md:1 |
| HIGH | **Dangerous call: subprocess.run().** Call to `subprocess.run()` detected in function `main`; this can execute arbitrary code. *Remediation:* Avoid dangerous functions like exec/eval/os.system; use safer alternatives. | Static | cli-tool/components/skills/scientific/scientific-schematics/scripts/generate_schematic.py:130 |
| HIGH | **Direct User Input to LLM Prompt.** The skill incorporates the user-provided `prompt` argument (from `scripts/generate_schematic.py`) directly into the messages sent to external Large Language Models (Nano Banana Pro AI and Gemini 3 Pro) via the OpenRouter API in `scripts/generate_schematic_ai.py`. While the skill's intended function is to process natural language descriptions, this direct injection of untrusted user input into the LLM prompt creates a prompt injection vulnerability. A malicious user could craft a prompt to manipulate the LLMs, potentially leading to unintended content generation (e.g., harmful images), disclosure of internal system prompts or instructions (e.g., `SCIENTIFIC_DIAGRAM_GUIDELINES`), or bypassing of safety filters. *Remediation:* Implement prompt sanitization or validation to filter out known prompt injection patterns. Consider using a separate, hardened LLM to pre-process or validate user prompts before sending them to the primary generation/review models. Clearly document the limitations and risks of prompt injection to users. | LLM | scripts/generate_schematic_ai.py:203 |
| MEDIUM | **Suspicious import: requests.** Import of `requests` detected; this module provides network or low-level system access. *Remediation:* Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | cli-tool/components/skills/scientific/scientific-schematics/scripts/generate_schematic_ai.py:31 |
| MEDIUM | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. *Remediation:* Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses; legitimate API calls should use well-known service domains. | Manifest | cli-tool/components/mcps/devtools/figma-dev-mode.json:4 |
| LOW | **Covert behavior / concealment directives.** Multiple zero-width characters (stealth text). *Remediation:* Remove hidden instructions, zero-width characters, and bidirectional overrides. Skill instructions should be fully visible and transparent to users. | Manifest | cli-tool/components/mcps/devtools/jfrog.json:4 |
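The remediation for the path-traversal finding, resolving the user-supplied path to an absolute path and then verifying it stays under an allowed base directory, can be sketched as follows. This is a minimal illustration: the `BASE_DIR` value and `safe_output_path` helper are assumed names, not code from the skill.

```python
from pathlib import Path

BASE_DIR = Path("output").resolve()  # allowed output directory (illustrative)

def safe_output_path(user_path: str) -> Path:
    """Resolve a user-supplied output path and refuse anything outside BASE_DIR."""
    # Joining with BASE_DIR also neutralizes absolute inputs like "/etc/passwd",
    # since pathlib replaces the base when the right-hand side is absolute --
    # the resolved result is then checked against BASE_DIR either way.
    candidate = (BASE_DIR / user_path).resolve()
    try:
        # relative_to() raises ValueError when candidate escapes BASE_DIR
        candidate.relative_to(BASE_DIR)
    except ValueError:
        raise ValueError(f"refusing path outside {BASE_DIR}: {user_path!r}")
    return candidate
```

With this check, `--output ../../../../etc/passwd` is rejected before any `open(..., 'wb')` call is made.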
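For the two command-execution findings, the recommended pattern of static commands rather than shell strings can be sketched like this. The script name, flags, and `run_generator` wrapper are illustrative assumptions, not the skill's actual invocation.

```python
import subprocess
import sys

def run_generator(prompt: str, output: str) -> int:
    """Invoke a generator script with a fixed argv list: no shell, no interpolation."""
    # Arguments are passed as discrete list items, so shell metacharacters in
    # `prompt` or `output` are never interpreted by a shell.
    result = subprocess.run(
        [sys.executable, "generate_schematic_ai.py",
         "--prompt", prompt, "--output", output],
        shell=False,           # explicit: never invoke /bin/sh
        check=False,
        capture_output=True,
        text=True,
        timeout=120,
    )
    return result.returncode
```

The key points are the list-form argv, `shell=False`, and a timeout; combined with the path check above the table's remediation advice, this removes the shell-injection surface even when the command itself must stay.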
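The low-severity finding flags zero-width "stealth text" characters. A minimal detector for the common zero-width code points (the character set and function name below are assumptions for illustration) could look like:

```python
# Common zero-width / invisible code points used to hide instructions
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def find_stealth_chars(text: str) -> list[tuple[int, str]]:
    """Return (index, 'U+XXXX') pairs for zero-width characters found in text."""
    return [(i, f"U+{ord(c):04X}") for i, c in enumerate(text) if c in ZERO_WIDTH]
```

Running such a check over skill manifests before installation surfaces hidden directives like the one flagged in `jfrog.json`.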
[Full report](https://skillshield.io/report/ab3f1334cb9f5a37)
Powered by SkillShield