Trust Assessment
economic-calendar-fetcher received a trust score of 51/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 1 critical, 1 high, 3 medium, and 0 low severity. Key findings include "Missing required field: name", "Suspicious import: urllib.request", and "Potential Command Injection via Unsanitized User Input in Shell Command Construction".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Potential Command Injection via Unsanitized User Input in Shell Command Construction.** The skill instructs the host LLM to construct and execute a `python3` command with arguments (`--from`, `--to`, `--api-key`, `--output`) that can be derived directly from user input. The skill description does not instruct the LLM to sanitize or quote these user-provided arguments before constructing the shell command, creating a critical command injection vulnerability: a malicious user could inject arbitrary shell commands through crafted date ranges, API keys, or output file paths. **Remediation:** Instruct the LLM to strictly sanitize and quote all user-provided arguments (e.g., dates, API keys, output paths) before incorporating them into the shell command, for example with `shlex.quote()`, or by wrapping all user-derived string arguments in single quotes and escaping any internal single quotes. Additionally, validate the format and content of user inputs (e.g., date formats, safe characters for filenames) before command construction. | LLM | SKILL.md:128 |
| HIGH | **Excessive File System Permissions via Unrestricted Output Path.** The skill allows the user to specify an arbitrary output file path via the `--output` argument of `get_economic_calendar.py`, and does not instruct the LLM to validate or restrict this path. A malicious user could attempt to write data to sensitive system locations (e.g., `/etc/passwd`, `/root/.ssh/authorized_keys`) or overwrite existing files. While the Python script itself uses standard file writing, the lack of path validation at the LLM command-construction level poses a significant risk. **Remediation:** Instruct the LLM to validate and restrict the `--output` path by 1) ensuring it lies within an allowed, sandboxed directory (e.g., a temporary directory or a user-specific output folder), 2) disallowing absolute paths and paths containing `..` to prevent directory traversal, and 3) confirming the file extension is appropriate (e.g., `.json`, `.md`). | LLM | SKILL.md:149 |
| MEDIUM | **Missing required field: name.** The `name` field is required for claude_code skills but is missing from the frontmatter. **Remediation:** Add a `name` field to the SKILL.md frontmatter. | Static | skills/veeramanikandanr48/economic-calendar-fetcher/SKILL.md:1 |
| MEDIUM | **Suspicious import: urllib.request.** Import of `urllib.request` detected. This module provides network or low-level system access; network and system modules in skill code may indicate data exfiltration. **Remediation:** Verify this import is necessary. | Static | skills/veeramanikandanr48/economic-calendar-fetcher/scripts/get_economic_calendar.py:13 |
| MEDIUM | **API Key Exposure via Command-Line Arguments.** The skill allows the FMP API key to be passed directly as a command-line argument via `--api-key YOUR_KEY`. While the script also supports reading the key from an environment variable (the more secure option), credentials passed on the command line are visible to other users through process listings (`ps aux`) and in shell history files, increasing the risk of credential compromise. **Remediation:** The LLM should prioritize the `FMP_API_KEY` environment variable. If a user provides a key in chat, the LLM should explain the risks of command-line exposure and encourage setting the key as an environment variable instead; `--api-key` should be used only as a last resort or in highly controlled environments. | LLM | SKILL.md:130 |
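The quoting remediation from the critical finding can be sketched in Python. `build_calendar_command` is a hypothetical helper, not part of the skill; the script path is assumed, but the flags mirror the ones named in the finding. `shlex.quote()` wraps each user-derived value in single quotes so shell metacharacters are passed through as literal text:

```python
import shlex

def build_calendar_command(from_date, to_date, api_key, output_path):
    """Hypothetical helper: quote every user-derived argument before
    the shell command is assembled, neutralizing metacharacters."""
    args = [
        "python3", "scripts/get_economic_calendar.py",
        "--from", from_date,
        "--to", to_date,
        "--api-key", api_key,
        "--output", output_path,
    ]
    # shlex.quote() single-quotes each value and escapes embedded
    # quotes, so `; rm -rf /` becomes inert literal text.
    return " ".join(shlex.quote(a) for a in args)
```

With this in place, an injection attempt like `--to "2026-01-31; rm -rf /"` reaches the script as a single (invalid-date) argument rather than a second shell command.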
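The output-path restriction from the high finding could look like the following sketch. The sandbox directory (`output/`) and allowed extensions are assumptions for illustration; the finding itself only requires that absolute paths, traversal, and unexpected extensions be rejected:

```python
from pathlib import Path

ALLOWED_DIR = Path("output")            # assumed sandbox directory
ALLOWED_SUFFIXES = {".json", ".md"}     # assumed allowed extensions

def validate_output_path(raw_path: str) -> Path:
    """Hypothetical check to run before passing --output to the script."""
    candidate = Path(raw_path)
    if candidate.is_absolute():
        raise ValueError("absolute output paths are not allowed")
    # Resolve relative to the sandbox, then confirm we stayed inside it
    # (catches traversal like '../../etc/passwd').
    resolved = (ALLOWED_DIR / candidate).resolve()
    if ALLOWED_DIR.resolve() not in resolved.parents:
        raise ValueError("output path escapes the sandbox directory")
    if resolved.suffix not in ALLOWED_SUFFIXES:
        raise ValueError("unexpected output file extension")
    return resolved
```

Resolving before checking ancestry is the key design choice: string prefix checks on the raw path can be bypassed with `..` segments, while `Path.resolve()` collapses them first.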
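The key-precedence rule from the API-key finding can be sketched as a small resolver. `resolve_api_key` is a hypothetical function (the skill's script may implement this differently); it encodes the recommended order: environment variable first, CLI-supplied key only with a warning:

```python
import os
from typing import Optional

def resolve_api_key(cli_key: Optional[str] = None) -> str:
    """Hypothetical resolver: prefer FMP_API_KEY from the environment;
    accept a CLI key only as a warned-about fallback."""
    env_key = os.environ.get("FMP_API_KEY")
    if env_key:
        return env_key
    if cli_key:
        # CLI keys leak via `ps aux` and shell history files.
        print("warning: --api-key is visible in process listings; "
              "prefer `export FMP_API_KEY=...` instead")
        return cli_key
    raise RuntimeError("no API key: set FMP_API_KEY or pass --api-key")
```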