Trust Assessment
AIsaFinancialData received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 0 high, 3 medium, and 0 low severity. The medium-severity findings are: unsafe deserialization / dynamic eval; a suspicious `urllib.request` import; and potential command injection via the `curl` examples.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. Remediation: remove obfuscated code execution patterns; legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/aisadevco/aisa-financial-data-api/scripts/market_client.py:303 |
| MEDIUM | **Suspicious import: `urllib.request`.** Import of `urllib.request` detected; this module provides network or low-level system access. Verify this import is necessary: network and system modules in skill code may indicate data exfiltration. | Static | skills/aisadevco/aisa-financial-data-api/scripts/market_client.py:30 |
| MEDIUM | **Potential command injection via `curl` examples.** `SKILL.md` provides `curl` examples that embed parameters such as `ticker`, `start_date`, `end_date`, and `interval` directly into the URL string. If an LLM agent builds these commands by interpolating untrusted user input into URL or header values without proper shell escaping, an attacker could craft input (e.g., `ticker=AAPL; rm -rf /`) that executes arbitrary commands on the host. The `market_client.py` script handles its arguments safely, but the `curl` examples demonstrate a pattern that is vulnerable if the agent handles it carelessly. Remediation: sanitize or strictly validate any user-provided input before incorporating it into shell commands; ensure all URL parameters and header values derived from untrusted input are URL-encoded and shell-escaped; prefer a dedicated HTTP client library (such as the provided Python script) over constructing `curl` commands directly from user input. | LLM | SKILL.md:50 |
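The deserialization finding above flags a decode-then-execute pattern. As a minimal sketch (the payload below is hypothetical, not taken from `market_client.py`), the difference between the flagged pattern and the safe alternative is whether decoded bytes are treated as code or as data:

```python
import base64
import json

# Hypothetical base64-encoded payload, e.g. received from a network response.
raw = base64.b64encode(b'{"ticker": "AAPL", "price": 182.5}')

# Pattern flagged by scanners: decoding a blob and executing it.
#   eval(base64.b64decode(raw))   # arbitrary code execution if payload is hostile

# Safer alternative: parse the decoded bytes as data, never as code.
data = json.loads(base64.b64decode(raw))
print(data["ticker"])  # AAPL
```

`json.loads` (or `ast.literal_eval` for Python literals) can only produce data structures, so a hostile payload fails to parse instead of running on the host.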
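The command-injection remediation (URL-encode, then shell-escape) can be sketched as follows. This is an illustrative helper, not part of the skill; `build_curl_command` and the endpoint URL are hypothetical:

```python
import shlex
from urllib.parse import urlencode

def build_curl_command(base_url: str, params: dict) -> str:
    """Build a curl command from untrusted parameter values.

    urlencode() neutralizes shell metacharacters like ';' and '&' inside
    parameter values, and shlex.quote() prevents the shell from
    interpreting the assembled URL.
    """
    url = f"{base_url}?{urlencode(params)}"
    return f"curl -s {shlex.quote(url)}"

# The malicious ticker value is rendered inert instead of terminating
# the command and running `rm -rf /`.
cmd = build_curl_command(
    "https://api.example.com/prices",  # placeholder endpoint
    {"ticker": "AAPL; rm -rf /", "interval": "1d"},
)
print(cmd)
```

The encoded command contains `%3B` rather than a literal `;`, so even if the agent passes it to a shell, no second command is executed. Using an HTTP client library directly, as the skill's `market_client.py` does, avoids the shell entirely and is the preferable design.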
[Full report](https://skillshield.io/report/b6c54eb2f8d6e3fa)
Powered by SkillShield