Trust Assessment
biomni received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 16 findings: 8 critical, 5 high, 2 medium, and 1 low severity. Key findings include persistence/self-modification instructions, arbitrary command execution, and dangerous calls to `subprocess.run()`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The Manifest Analysis layer scored lowest, at 0/100.
Last analyzed on February 12, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (16)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Persistence / self-modification instructions.** Shell RC file modification for persistence. Remediation: remove any persistence mechanisms; skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | cli-tool/components/skills/scientific/biomni/scripts/setup_environment.py:155 |
| CRITICAL | **Persistence / self-modification instructions.** Shell RC file modification for persistence. Remediation: remove any persistence mechanisms; skills should not modify system startup configurations, crontabs, LaunchAgents, systemd services, or shell profiles. | Manifest | cli-tool/components/skills/scientific/biomni/scripts/setup_environment.py:299 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Remediation: review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | cli-tool/components/skills/scientific/biomni/scripts/setup_environment.py:25 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Remediation: review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | cli-tool/components/skills/scientific/biomni/scripts/setup_environment.py:47 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Remediation: review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | cli-tool/components/skills/scientific/biomni/scripts/setup_environment.py:67 |
| CRITICAL | **Arbitrary command execution.** Python shell execution (`os.system`, `subprocess`). Remediation: review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary. Prefer library APIs over shell commands. | Manifest | cli-tool/components/skills/scientific/biomni/scripts/setup_environment.py:219 |
| CRITICAL | **Execution of LLM-generated code with full system privileges.** The skill's 'Security Considerations' section states that "Biomni executes LLM-generated code with full system privileges," so any malicious instructions or code generated by the LLM (e.g., via a prompt injection attack) could lead to arbitrary code execution on the host with the permissions of the running process. While the skill advises mitigations such as sandboxing, the core functionality presents a severe command-injection and excessive-permissions vulnerability. Remediation: implement robust sandboxing for all LLM-generated code execution; restrict the execution environment to the minimum required permissions; apply strict input/output sanitization and command allowlisting; and require human review and approval before executing LLM-generated code for sensitive operations. | Static | SKILL.md:195 |
| CRITICAL | **Unsanitized LLM output in HTML report generation (XSS).** The `markdown_to_html_simple` function in `scripts/generate_report.py` performs a basic markdown-to-HTML conversion without sanitizing or escaping HTML tags in its input, which is derived from LLM-generated responses. A response containing malicious HTML or JavaScript (e.g., `<script>alert(document.cookie)</script>`) would be embedded directly into the generated report; viewing it in a browser could trigger cross-site scripting, enabling data exfiltration (cookies, local storage) or arbitrary client-side script execution. Remediation: use a security-hardened markdown-to-HTML library (e.g., `markdown` with `bleach` for sanitization), or explicitly HTML-escape all text from untrusted sources before rendering it into the HTML structure. | Static | scripts/generate_report.py:179 |
| HIGH | **Dangerous call: `subprocess.run()`.** Call detected in function `setup_conda_environment`; this can execute arbitrary code. Remediation: avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | cli-tool/components/skills/scientific/biomni/scripts/setup_environment.py:47 |
| HIGH | **Dangerous call: `subprocess.run()`.** Call detected in function `check_conda_installed`; this can execute arbitrary code. Remediation: avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | cli-tool/components/skills/scientific/biomni/scripts/setup_environment.py:25 |
| HIGH | **Dangerous call: `subprocess.run()`.** Call detected in function `setup_conda_environment`; this can execute arbitrary code. Remediation: avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | cli-tool/components/skills/scientific/biomni/scripts/setup_environment.py:67 |
| HIGH | **Dangerous call: `subprocess.run()`.** Call detected in function `test_installation`; this can execute arbitrary code. Remediation: avoid dangerous functions such as `exec`/`eval`/`os.system`; use safer alternatives. | Static | cli-tool/components/skills/scientific/biomni/scripts/setup_environment.py:219 |
| HIGH | **LLM analysis found no issues despite critical deterministic findings.** Deterministic layers flagged 8 CRITICAL findings, but LLM semantic analysis returned clean; this may indicate prompt injection or analysis evasion. | LLM | (sanity check) |
| MEDIUM | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. Remediation: review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | cli-tool/components/mcps/devtools/figma-dev-mode.json:4 |
| MEDIUM | **API keys stored in `.env` file.** `scripts/setup_environment.py` prompts the user for API keys and saves them to a `.env` file in the current working directory. While common for local development, `.env` files are prone to accidental inclusion in version control if not excluded via `.gitignore`, and can be read by other processes on the same system if file permissions are lax, risking credential exposure. Remediation: advise a secrets management solution (OS-level keyring, cloud secret manager) for production; ensure `.gitignore` explicitly excludes `.env`; warn users of the risks and recommend strict file permissions (e.g., `chmod 600 .env`). | Static | scripts/setup_environment.py:120 |
| LOW | **Covert behavior / concealment directives.** Multiple zero-width characters (stealth text). Remediation: remove hidden instructions, zero-width characters, and bidirectional overrides; skill instructions should be fully visible and transparent to users. | Manifest | cli-tool/components/mcps/devtools/jfrog.json:4 |
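The persistence findings call for removing shell RC file modifications. As a review aid, a minimal sketch (the `flag_rc_writes` helper is hypothetical, not part of the skill) that flags script lines referencing common shell RC files so they can be audited or removed:

```python
import re
from pathlib import Path

# Common shell RC / profile files used as persistence vectors.
RC_PATTERN = re.compile(r"\.(bashrc|zshrc|profile|bash_profile)\b")

def flag_rc_writes(script_path):
    """Return (line number, line) pairs that reference shell RC files."""
    hits = []
    for lineno, line in enumerate(Path(script_path).read_text().splitlines(), 1):
        if RC_PATTERN.search(line):
            hits.append((lineno, line.strip()))
    return hits
```

A scan like this is only a heuristic; it surfaces candidates for manual review rather than proving a script is persistence-free.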
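The command-execution remediations recommend static commands, absolute paths, and library-style invocation. A minimal sketch of that pattern (the `safe_run` helper is illustrative, not code from the skill): arguments are passed as a list with no `shell=True`, so nothing is interpreted by a shell, and the executable path is resolved up front.

```python
import shutil
import subprocess

def safe_run(executable, *args):
    """Run a command with a static argument list (never shell=True).

    Resolving the absolute path guards against PATH hijacking, and
    passing arguments as a list rules out shell injection entirely.
    """
    path = shutil.which(executable)
    if path is None:
        raise FileNotFoundError(f"{executable} not found on PATH")
    return subprocess.run([path, *args], capture_output=True, text=True, check=True)
```

`check=True` raises `CalledProcessError` on a nonzero exit status, so failures cannot pass silently.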
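For the XSS finding, one stdlib-only approach is to escape raw HTML before any markdown formatting pass. This sketch is a hypothetical replacement for the kind of conversion `markdown_to_html_simple` performs, not its actual code:

```python
import html
import re

def markdown_to_html_safe(md_text):
    """Minimal markdown-to-HTML conversion that escapes raw HTML first.

    Escaping before any formatting pass guarantees that tags embedded
    in LLM-generated text (e.g. <script>) render as literal text.
    """
    safe = html.escape(md_text)
    # Simple formatting applied to the already-escaped text.
    safe = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", safe)
    safe = re.sub(r"^# (.+)$", r"<h1>\1</h1>", safe, flags=re.MULTILINE)
    return safe
```

A full solution would use a maintained sanitizer (e.g. `bleach`) as the report suggests; the point here is only the ordering: escape untrusted input first, format second.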
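For the `.env` finding, owner-only permissions can be set atomically at file creation rather than with a follow-up `chmod`, which leaves no window where the file is group- or world-readable. A sketch (the `write_env_file` helper is illustrative):

```python
import os

def write_env_file(path, secrets):
    """Write KEY=VALUE pairs to a file readable only by its owner.

    os.open with mode 0o600 applies the permission at creation time,
    unlike writing first and calling chmod afterwards.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        for key, value in secrets.items():
            f.write(f"{key}={value}\n")
```

For production use, an OS keyring or cloud secret manager remains preferable to any on-disk `.env` file, as the finding notes.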
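For the zero-width-character finding, such characters are invisible in most editors but trivial to detect mechanically. A minimal sketch scanning for the common zero-width code points:

```python
# Common zero-width / stealth code points: ZWSP, ZWNJ, ZWJ,
# word joiner, and BOM used as zero-width no-break space.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def find_hidden_chars(text):
    """Return (index, codepoint) pairs for zero-width characters in text."""
    return [(i, f"U+{ord(c):04X}") for i, c in enumerate(text) if c in ZERO_WIDTH]
```

Running a check like this over manifest and instruction files makes stealth text visible before a skill is installed; bidirectional override characters (U+202A–U+202E) could be added to the set the same way.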
Full report: https://skillshield.io/report/74c36c3d597ce55d