Trust Assessment
blockrun received a trust score of 55/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 1 critical, 2 high, 1 medium, and 1 low severity. Key findings include excessive Bash permissions that allow arbitrary code execution, network egress to untrusted endpoints, and covert behavior / concealment directives.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 11, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Excessive Bash permissions allow arbitrary code execution.** The skill declares broad Bash permissions (`Bash(python:*)`, `Bash(python3:*)`, `Bash(pip:*)`, `Bash(source:*)`) in its manifest. These permissions allow the agent to execute arbitrary Python code, install unverified packages, and run arbitrary shell scripts. This poses a critical security risk, as a malicious prompt could instruct the agent to execute harmful commands, exfiltrate data, or compromise the host system. **Recommendation:** Restrict Bash permissions to specific, necessary commands and arguments. Avoid wildcard `*` usage. For Python, consider using a sandboxed environment or specific function calls instead of arbitrary script execution. For `pip`, consider using a virtual environment and pinning package versions. | LLM | Manifest |
| HIGH | **Unscoped Read permission allows access to arbitrary files.** The skill declares a broad `Read` permission in its manifest without specifying any scope or allowed directories. This allows the agent to read any file accessible to its execution environment, including sensitive configuration files, user data, or the wallet session file (`$HOME/.blockrun/.session`) mentioned in the skill. This poses a significant data exfiltration risk if the agent is prompted to read and output sensitive file contents. **Recommendation:** Restrict `Read` permissions to specific, necessary directories or file patterns. Avoid granting unscoped `Read` access. | LLM | Manifest |
| HIGH | **Unpinned dependency in `pip install` instructions.** The skill explicitly instructs the agent to install the `blockrun-llm` package using `pip install blockrun-llm` and `pip install --upgrade blockrun-llm` without specifying a version. This creates a supply chain risk, as a malicious update to the `blockrun-llm` package could be automatically installed, leading to arbitrary code execution or other compromises, especially given the `Bash(pip:*)` permission. **Recommendation:** Pin the dependency to a specific version (e.g., `pip install blockrun-llm==1.2.3`) to ensure reproducible and secure installations. Regularly review and update pinned versions. | LLM | SKILL.md:150 |
| MEDIUM | **Network egress to untrusted endpoints.** HTTP request to a raw IP address. **Recommendation:** Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | cli-tool/components/mcps/devtools/figma-dev-mode.json:4 |
| LOW | **Covert behavior / concealment directives.** Multiple zero-width characters (stealth text). **Recommendation:** Remove hidden instructions, zero-width characters, and bidirectional overrides. Skill instructions should be fully visible and transparent to users. | Manifest | cli-tool/components/mcps/devtools/jfrog.json:4 |
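The two permission findings share one remediation: scope every grant. As a minimal sketch, assuming the skill uses Claude-style `allowed-tools` frontmatter in its SKILL.md (the specific commands, paths, and version below are illustrative placeholders, not taken from the skill), a tighter manifest might look like:

```yaml
---
name: blockrun
description: Illustrative description; not the skill's actual text.
# Scoped grants instead of Bash(python:*), Bash(pip:*), and unscoped Read.
# Exact pattern syntax varies by runtime; treat these as placeholders.
allowed-tools: Bash(python3 scripts/client.py:*), Bash(pip install blockrun-llm==1.2.3), Read(~/.blockrun/**)
---
```

Scoping `Read` to the skill's own state directory keeps the wallet session file reachable while blocking reads of arbitrary user files.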
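For the unpinned-dependency finding, version pinning can be hardened further with hash-checking. A sketch of a lock file (the version and hash are placeholders, not vetted values for `blockrun-llm`):

```text
# requirements.txt — pin the version and lock the artifact hash
blockrun-llm==1.2.3 \
    --hash=sha256:<hash-of-the-vetted-wheel>
```

Installing with `pip install --require-hashes -r requirements.txt` makes pip refuse any artifact whose hash does not match, so even a malicious re-release under the pinned version number is rejected.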
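The two manifest-layer signals, raw-IP egress and zero-width "stealth text", are straightforward to screen for locally before installing a skill. A minimal sketch in Python; the helper names are hypothetical, and the character set covers only the most common zero-width and bidi-control code points:

```python
import re

# Zero-width and bidirectional-control code points commonly used to hide text.
STEALTH_CHARS = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\u2060",  # WORD JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE (BOM)
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # bidi embeddings/overrides
}

# http(s) URLs whose host is a raw IPv4 address, e.g. http://203.0.113.7/hook
RAW_IP_URL = re.compile(r"https?://(?:\d{1,3}\.){3}\d{1,3}")

def find_stealth_chars(text: str) -> list[int]:
    """Return the offsets of zero-width / bidi-control characters in text."""
    return [i for i, ch in enumerate(text) if ch in STEALTH_CHARS]

def find_raw_ip_urls(text: str) -> list[str]:
    """Return any http(s) URLs in text that point at a raw IPv4 address."""
    return RAW_IP_URL.findall(text)
```

Running both checks over every file a skill ships (manifests, SKILL.md, bundled JSON) catches exactly the kind of evidence cited in the MEDIUM and LOW findings above.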