Trust Assessment
jb-split-hook received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. Key findings include Skill attempts to inject instructions into host LLM from untrusted content, Skill instructs LLM to generate executable deployment scripts, risking command injection, Skill instructs LLM to write files to specific paths, risking path traversal.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Skill attempts to inject instructions into host LLM from untrusted content The 'Output Format' section, which contains direct instructions for the host LLM (e.g., 'Generate: 1. Main contract in `src/`'), is located entirely within the `<!---UNTRUSTED_INPUT_START_...--->` and `<!---UNTRUSTED_INPUT_END_...--->` delimiters. This violates the core security rule to treat all content between these tags as untrusted data, not instructions. A malicious user could exploit this by providing their own instructions within the untrusted input to manipulate the LLM's behavior, potentially overriding the skill's intended functionality or causing unintended actions. Move all instructions intended for the host LLM (such as 'Output Format' and 'Generation Guidelines') outside of the `<!---UNTRUSTED_INPUT_START_...--->` and `<!---UNTRUSTED_INPUT_END_...--->` delimiters. These instructions must be part of the trusted skill definition, not user-controlled content. | LLM | SKILL.md:127 | |
| HIGH | Skill instructs LLM to generate executable deployment scripts, risking command injection The skill explicitly instructs the LLM to generate a 'Deployment script in `script/`'. Deployment scripts typically involve executing shell commands (e.g., `forge deploy`, `npm install`). If a malicious user can influence the content of this generated script through prompt injection (exacerbated by the previous finding), they could inject arbitrary shell commands, leading to command injection on the host system. This allows for potential arbitrary code execution. 1. Ensure all instructions for the LLM are outside the untrusted input delimiters. 2. Implement strict sanitization and validation of user input when generating executable content like deployment scripts. 3. Execute generated scripts in a highly sandboxed and isolated environment with minimal permissions, or disallow direct execution of user-influenced scripts entirely. 4. Consider using a templating engine for deployment scripts where only specific, safe parameters can be filled by user input, rather than free-form generation. | LLM | SKILL.md:130 | |
| HIGH | Skill instructs LLM to write files to specific paths, risking path traversal The skill instructs the LLM to generate files in specific directories (`src/`, `src/interfaces/`, `test/`, `script/`). If the LLM's file writing mechanism does not strictly validate and sanitize the target file paths, a malicious user could attempt path traversal (e.g., `../../../../etc/passwd`) to write to arbitrary locations on the host filesystem. This could lead to overwriting critical system files, creating malicious files, or exfiltrating data by writing to publicly accessible directories. This instruction is also located within the untrusted input, making it susceptible to manipulation. 1. Ensure all instructions for the LLM are outside the untrusted input delimiters. 2. Strictly enforce a sandboxed, temporary, and isolated directory for all file generation. 3. Implement robust path sanitization and validation to prevent path traversal attacks, ensuring that generated files can only be written within the designated safe directory. | LLM | SKILL.md:128 |
Scan History
Embed Code
[](https://skillshield.io/report/4fc420bf3c552190)
Powered by SkillShield