Trust Assessment
MarketPulse received a trust score of 76/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Both are potential command-injection issues: one in `curl` arguments and one in `python3` script arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection in `curl` arguments.** The skill documentation demonstrates constructing `curl` commands in which parameters (`ticker`, `interval`, `start_date`, `end_date`, `limit`, `period`, `bank`) are interpolated directly into the URL. If these parameters derive from untrusted user input and are not sanitized or shell-escaped before the command runs in a shell, an attacker could inject arbitrary shell commands; for example, a malicious `ticker` value such as `AAPL$(malicious_command)` could lead to arbitrary code execution. Mitigation: strictly validate all user-provided parameters (e.g., against a whitelist of allowed characters and formats) and/or shell-escape them before interpolation, or avoid the shell entirely by issuing the request from a safer execution environment (e.g., Python `requests`). | LLM | SKILL.md:47 |
| HIGH | **Potential Command Injection in `python3` script arguments.** The skill documentation provides examples of executing a local Python script (`{baseDir}/scripts/market_client.py`) with arguments such as `--ticker`, `--start`, `--end`, `--type`, `--count`, `--pe-max`, and `--growth-min`. If these arguments derive from untrusted user input and are not sanitized or shell-escaped before being passed to `python3`, an attacker could inject arbitrary shell commands; for example, a malicious `ticker` value such as `AAPL --evil-arg "value"; rm -rf /` could lead to arbitrary code execution. Mitigation: strictly validate and/or shell-escape all user-provided arguments, or, ideally, use a programmatic interface (e.g., Python's `subprocess.run` with `shell=False` and arguments passed as a list) rather than constructing a shell string. | LLM | SKILL.md:189 |
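The first finding's mitigation (whitelist validation plus shell-escaping) can be sketched as follows. This is a minimal illustration, not the skill's actual code: the endpoint URL and the 1-to-5-uppercase-letter ticker pattern are assumptions chosen for the example.

```python
import re
import shlex

# Hypothetical whitelist: tickers are 1-5 uppercase letters.
TICKER_RE = re.compile(r"[A-Z]{1,5}")


def build_curl_command(ticker: str) -> str:
    """Validate the ticker, then build a curl command whose URL is
    shell-escaped so it reaches curl as a single inert argument."""
    if not TICKER_RE.fullmatch(ticker):
        raise ValueError(f"invalid ticker: {ticker!r}")
    # Placeholder endpoint standing in for the skill's real API URL.
    url = f"https://api.example.com/prices?ticker={ticker}"
    return "curl -s " + shlex.quote(url)
```

A value like `AAPL$(malicious_command)` fails the whitelist check before any shell is involved, and `shlex.quote` ensures even a valid URL cannot be split or interpreted by the shell.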
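The second finding's recommended fix, invoking the script through `subprocess.run` with an argument list instead of a shell string, can be sketched like this. The script path and flag names mirror the examples cited in SKILL.md but are assumptions here:

```python
import subprocess
import sys


def market_client_argv(ticker: str, start: str, end: str) -> list:
    """Build the argument vector; each value remains a single literal
    argv entry, never parsed by a shell."""
    return [
        sys.executable, "scripts/market_client.py",  # path assumed from SKILL.md
        "--ticker", ticker,
        "--start", start,
        "--end", end,
    ]


def run_market_client(ticker: str, start: str, end: str):
    # With a list and shell=False (the default), no shell interprets
    # these strings, so metacharacters like ";" or "$()" are inert data.
    return subprocess.run(market_client_argv(ticker, start, end),
                          capture_output=True, text=True)
```

Because the malicious example `AAPL --evil-arg "value"; rm -rf /` is passed as one element of the list, the child process receives it as a single literal `--ticker` value rather than as extra flags or a second command.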
Full report: https://skillshield.io/report/18154c68c9bd405a