Trust Assessment
jadwal-sholat received a trust score of 79/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include "Suspicious import: urllib.request" and "Shell Command Injection via User Input in SKILL.md Examples".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Shell Command Injection via User Input in SKILL.md Examples.** The `SKILL.md` provides `bash` command examples that take a keyword as an argument (e.g., `python3 scripts/myquran_sholat.py cari "tangerang"`). If an LLM constructs these commands by directly embedding unsanitized user input into the `keyword` argument, a malicious user could inject arbitrary shell commands. For example, if the user provides `foo"; rm -rf / #` as the keyword, the resulting command would be `python3 scripts/myquran_sholat.py cari "foo"; rm -rf / #"`, leading to the execution of `rm -rf /`. While the Python script itself uses `argparse` and `urllib.parse.quote` to protect against URL injection *within the script's logic*, the initial shell invocation is vulnerable if the LLM does not properly escape user input for the shell. The LLM orchestrating the skill execution must ensure that any user-provided input used in shell commands is properly escaped or quoted for the shell environment, for example by applying `shlex.quote()` in Python or an equivalent shell-escaping mechanism before constructing the final command string. Alternatively, the skill could expose a more structured API (e.g., a Python function call) that bypasses direct shell execution for user-controlled parameters. | LLM | SKILL.md:13 |
| MEDIUM | **Suspicious import: urllib.request.** Import of 'urllib.request' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/banghasan/jadwal-sholat-banghasan/scripts/myquran_sholat.py:24 |
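The HIGH finding's escaping recommendation can be sketched as follows. This is a minimal illustration, not code from the skill itself: it uses the malicious keyword from the finding to show how naive string interpolation breaks out of the quoted argument, and how `shlex.quote()` from the standard library neutralizes it.

```python
import shlex

# The hostile keyword from the report: the embedded double quote terminates
# the quoted argument early, and `; rm -rf /` runs as a second command.
keyword = 'foo"; rm -rf / #'

# Naive interpolation (vulnerable): the shell sees two commands.
unsafe = f'python3 scripts/myquran_sholat.py cari "{keyword}"'
print(unsafe)  # python3 scripts/myquran_sholat.py cari "foo"; rm -rf / #"

# shlex.quote() wraps the value so the shell treats it as one literal
# argument; the injected quote and semicolon lose their special meaning.
safe = f'python3 scripts/myquran_sholat.py cari {shlex.quote(keyword)}'
print(safe)    # python3 scripts/myquran_sholat.py cari 'foo"; rm -rf / #'
```

A stronger option, also hinted at by the finding, is to skip shell string construction entirely and pass an argument list, e.g. `subprocess.run(["python3", "scripts/myquran_sholat.py", "cari", keyword])`, so no shell parsing of user input ever occurs.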
[Full report](https://skillshield.io/report/213379fd06c2e9d0)
Powered by SkillShield