Trust Assessment
surfline received a trust score of 83/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 0 high, 3 medium, and 0 low severity. Key findings include Suspicious import: urllib.request, Prompt Injection via direct command-line argument output, Prompt Injection via API-derived and user-configured text output.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| MEDIUM | Suspicious import: urllib.request Import of 'urllib.request' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/miguelcarranza/surfline/scripts/surfline_client.py:13 | |
| MEDIUM | Prompt Injection via direct command-line argument output The `spotId` argument, taken directly from command-line input (`sys.argv[1]`), is printed directly to standard output without sanitization for LLM interpretation. An attacker or a malicious LLM could inject prompt instructions into this argument, which would then be presented to the host LLM, potentially leading to unintended actions or information disclosure. Sanitize all output that might be processed by an LLM. For `spot_id`, consider explicitly encoding or filtering characters that could be interpreted as prompt instructions (e.g., newlines, markdown characters) before printing. Alternatively, ensure the LLM is instructed to treat tool output as literal text. | LLM | scripts/surfline_report.py:60 | |
| MEDIUM | Prompt Injection via API-derived and user-configured text output Text fields derived from external API responses (e.g., `headline`, `name`, `url`, `spot_id`) or user-controlled configuration (`favorites.json`) are printed directly to standard output without sanitization for LLM interpretation. If these external sources are compromised or manipulated, they could contain prompt injection payloads. When the host LLM processes this output, it could be manipulated into performing unintended actions. This vulnerability is present in `scripts/surfline_report.py` (headline), `scripts/surfline_favorites.py` (name, headline from config/API), and `scripts/surfline_search.py` (name, spot_id, url from API). Sanitize all output that might be processed by an LLM. Specifically, `headline`, `name`, `url`, and `spot_id` (when derived from API or config) should be filtered or encoded to prevent prompt injection. This could involve stripping control characters, newlines, or markdown formatting, or explicitly instructing the LLM to treat tool output as literal text. | LLM | scripts/surfline_report.py:60 |
Scan History
Embed Code
[](https://skillshield.io/report/d4497a93c09f21a9)
Powered by SkillShield