Trust Assessment
technical-blog-writing received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 6 findings: 3 critical, 1 high, 1 medium, and 1 low severity. Key findings include covert behavior / concealment directives, arbitrary command execution, and remote code execution via curl/wget piped to a shell.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest, at 48/100.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary command execution.** Remote code download piped to an interpreter. Remediation: review all shell execution calls; ensure commands are static (not built from user input), use absolute paths, and are strictly necessary; prefer library APIs over shell commands. | Manifest | skills/okaris/technical-blog-writing/SKILL.md:9 |
| CRITICAL | **Remote code execution: curl/wget pipe to shell.** Detected a pattern that downloads and immediately executes remote code, a primary malware delivery vector. Remediation: never pipe curl/wget output directly to a shell interpreter. | Static | skills/okaris/technical-blog-writing/SKILL.md:9 |
| CRITICAL | **Arbitrary Python code execution via infsh/python-executor.** The manifest grants broad `Bash(infsh *)` permissions, allowing execution of any `infsh` command, and the skill explicitly demonstrates `infsh app run infsh/python-executor` with a `code` argument. If the LLM constructs that `code` argument from untrusted user input, an attacker can execute arbitrary Python code on the host system, a direct path to remote code execution (RCE). Remediation: (1) narrow the `Bash` permission from `Bash(infsh *)` to only the specific `infsh` commands and arguments required; (2) remove `infsh/python-executor` unless arbitrary Python execution is a core, unavoidable feature, and if it is essential, run it in a strictly sandboxed environment with minimal privileges and network access; (3) validate and sanitize any arguments passed to `infsh/python-executor` to prevent injection of malicious code. | LLM | SKILL.md:183 |
| HIGH | **Potential server-side request forgery (SSRF) via infsh/html-to-image.** The skill demonstrates `infsh app run infsh/html-to-image`, which accepts arbitrary HTML as input. If the underlying tool renders that HTML with a browser engine, an attacker could inject markup such as `<img src="http://internal-resource/">` or `<iframe src="file:///etc/passwd">` to perform SSRF, enabling exfiltration of internal-network or local-file data, or port scanning. Remediation: (1) strictly sanitize the HTML input, removing dangerous tags (e.g. `<iframe>`, `<script>`, `<object>`, `<embed>`) and attributes (e.g. `src`, `href`) that point to external or local resources; (2) run `html-to-image` in a network-isolated environment that cannot reach internal resources or sensitive files; (3) narrow the `Bash` permission to specific `infsh/html-to-image` arguments where possible. | LLM | SKILL.md:16 |
| MEDIUM | **Potential prompt injection or command injection via the exa/search query.** The skill demonstrates `infsh app run exa/search` with a `query` argument. If `exa/search` is an LLM-based tool, the `query` parameter is a direct prompt-injection vector; if it processes the query via shell commands or file-system access, it could also enable command injection or data exfiltration. The broad `Bash(infsh *)` permission exacerbates this risk. Remediation: (1) strictly validate and sanitize the `query` argument; (2) harden `exa/search` itself against prompt injection, command injection, and data exfiltration; (3) narrow the `Bash` permission to specific `exa/search` arguments where possible. | LLM | SKILL.md:11 |
| LOW | **Covert behavior / concealment directives.** CSS-based text hiding detected. Remediation: remove hidden instructions, zero-width characters, and bidirectional overrides; skill instructions should be fully visible and transparent to users. | Manifest | skills/okaris/technical-blog-writing/SKILL.md:18 |
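The two curl/wget pipe-to-shell findings share one mitigation: separate download, verification, and execution so that unexpected remote content is rejected before it ever reaches an interpreter. A minimal Python sketch of that pattern, assuming a digest pinned ahead of time (function names and workflow are illustrative, not taken from the skill):

```python
import hashlib
import subprocess
import tempfile
import urllib.request


def verify_sha256(data: bytes, expected: str) -> bytes:
    """Refuse downloaded bytes unless their digest matches a pinned value."""
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected:
        raise ValueError(f"checksum mismatch: got {digest}")
    return data


def run_remote_script(url: str, expected_sha256: str) -> None:
    """Download, verify, then execute -- never `curl ... | sh`."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    verify_sha256(data, expected_sha256)
    # Write to a temp file so the executed script is exactly what was verified.
    with tempfile.NamedTemporaryFile(suffix=".sh", delete=False) as f:
        f.write(data)
        path = f.name
    subprocess.run(["/bin/sh", path], check=True)
```

The key property is that a compromised server can no longer silently swap the payload: any byte-level change breaks the pinned digest.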
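The recurring remediation for the `Bash(infsh *)` wildcard grant is an explicit allowlist: enumerate the invocations the skill actually needs and reject everything else. A hedged sketch of what that could look like at a wrapper level (the allowlist contents are assumptions; only the skill author knows which apps are required):

```python
import subprocess

# Hypothetical allowlist: only the infsh apps this skill genuinely needs.
# Note infsh/python-executor is deliberately absent.
ALLOWED_INFSH_APPS = {"exa/search", "infsh/html-to-image"}


def run_infsh_app(app: str, *args: str) -> subprocess.CompletedProcess:
    """Run `infsh app run <app> ...` only for allowlisted apps, never via a shell."""
    if app not in ALLOWED_INFSH_APPS:
        raise PermissionError(f"infsh app not allowlisted: {app!r}")
    # List-form argv with no shell: metacharacters in args stay inert.
    cmd = ["infsh", "app", "run", app, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True)
```

Because the argv is a list and `shell=True` is never used, user-supplied arguments cannot splice in extra commands; the allowlist then closes the remaining hole of invoking a dangerous app directly.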
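For the SSRF finding, the sanitization step can be sketched with the standard-library `html.parser`: drop the dangerous tags the finding names, plus URL-bearing and event-handler attributes. This is a minimal illustration, not a production sanitizer; a vetted library and network isolation remain the stronger controls:

```python
from html import escape
from html.parser import HTMLParser

BLOCKED_TAGS = {"script", "iframe", "object", "embed", "link", "meta", "base", "form"}
URL_ATTRS = {"src", "href", "formaction"}


class Sanitizer(HTMLParser):
    """Re-emit HTML with SSRF- and script-capable constructs removed."""

    def __init__(self) -> None:
        super().__init__(convert_charrefs=True)
        self.out: list[str] = []
        self._skip_depth = 0  # >0 while inside a blocked element

    def handle_starttag(self, tag, attrs):
        if tag in BLOCKED_TAGS:
            self._skip_depth += 1
            return
        if self._skip_depth:
            return
        # Keep only attributes that cannot fetch a URL or run a handler.
        kept = [
            (k, v)
            for k, v in attrs
            if k.lower() not in URL_ATTRS and not k.lower().startswith("on")
        ]
        attr_text = "".join(f' {k}="{escape(v or "", quote=True)}"' for k, v in kept)
        self.out.append(f"<{tag}{attr_text}>")

    def handle_endtag(self, tag):
        if tag in BLOCKED_TAGS:
            self._skip_depth = max(0, self._skip_depth - 1)
        elif not self._skip_depth:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self._skip_depth:
            self.out.append(escape(data))


def sanitize_html(html_text: str) -> str:
    s = Sanitizer()
    s.feed(html_text)
    s.close()
    return "".join(s.out)
```

With this filter, the finding's example payloads degrade harmlessly: `<img src="http://internal-resource/">` becomes a bare `<img>`, and an `<iframe>` pointing at `file:///etc/passwd` is removed along with its contents.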
Full report: https://skillshield.io/report/0e054c73a0e0ac9d