Trust Assessment
blockrun received a trust score of 43/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 1 critical, 2 high, 2 medium, and 0 low severity. Key findings include covert behavior / concealment directives, sensitive environment variable access (`$HOME`), and arbitrary code execution via broad Bash permissions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary Code Execution via Broad Bash Permissions.** The skill declares highly permissive `Bash(python:*)`, `Bash(python3:*)`, and `Bash(source:*)` permissions, allowing the agent to execute arbitrary Python code or source arbitrary shell scripts. This is a critical command injection vulnerability: a malicious prompt can execute any command on the host system, leading to data exfiltration, system compromise, or denial of service. *Remediation:* drastically narrow the Bash permissions. Instead of `Bash(python:*)`, whitelist specific Python scripts or functions; avoid `Bash(source:*)` entirely unless absolutely necessary, and then only with strict input validation. Run any code execution in a sandboxed environment. | LLM | SKILL.md |
| HIGH | **Covert behavior / concealment directives.** A directive instructs the agent to hide behavior from the user. *Remediation:* remove hidden instructions, zero-width characters, and bidirectional overrides. Skill instructions should be fully visible and transparent to users. | Manifest | skills/blockrun/SKILL.md:50 |
| HIGH | **Supply Chain Risk from Unpinned Dependency and Broad Pip Permission.** The skill recommends installing `blockrun-llm` without specifying a version (`pip install blockrun-llm`). This unpinned dependency, combined with the `Bash(pip:*)` permission, creates a significant supply chain risk: a malicious update to the `blockrun-llm` package could be installed automatically, leading to arbitrary code execution or other compromise. The `Bash(pip:*)` permission itself lets the agent install any package from PyPI. *Remediation:* pin all Python dependencies to specific, known-good versions (e.g., `pip install blockrun-llm==1.2.3`); audit dependencies regularly for vulnerabilities; restrict `Bash(pip:*)` to whitelisted packages or trusted, internal package indexes. | LLM | SKILL.md:200 |
| MEDIUM | **Sensitive environment variable access: `$HOME`.** Access to the sensitive environment variable `$HOME` was detected in a shell context. *Remediation:* verify this access is necessary and that the value is not exfiltrated. | Static | skills/blockrun/SKILL.md:253 |
| MEDIUM | **Sensitive Wallet Path Disclosure Enabling Data Exfiltration.** The skill explicitly discloses the location of the agent's wallet session file (`$HOME/.blockrun/.session`). While the skill itself does not exfiltrate this data, the disclosure, combined with the `Bash(python:*)` permission (which grants arbitrary file system read access), gives a malicious prompt a clear path to read and exfiltrate sensitive wallet data. *Remediation:* avoid disclosing sensitive file paths in skill documentation; ensure access to sensitive files is strictly controlled by the underlying library rather than exposed via broad system permissions. The primary fix is to narrow the overly broad `Bash(python:*)` permission. | LLM | SKILL.md:230 |
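The remediation for the critical finding can be sketched as a narrowed permission declaration in the skill manifest. The snippet below is illustrative only: it assumes a Claude-style `allowed-tools` frontmatter field, and `scripts/run_blockrun.py` plus the pinned version are hypothetical, not taken from the actual skill:

```
---
name: blockrun
# Instead of Bash(python:*) / Bash(source:*) / Bash(pip:*), allow only one
# known entry-point script and one pinned install command (both hypothetical).
allowed-tools: Bash(python3 scripts/run_blockrun.py:*), Bash(pip install blockrun-llm==1.2.3)
---
```

Scoping each permission to a single command prefix means a prompt-injected instruction to run an arbitrary Python one-liner or install an attacker-controlled package would fall outside the allowed set.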
[View the full report](https://skillshield.io/report/9f00c425cdb1fdf0)
Powered by SkillShield