Trust Assessment
luma received a trust score of 64/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 2 high, 1 medium, and 1 low severity. Key findings include a suspicious `urllib.request` import, a missing Node lockfile, and an instruction for the host LLM to write to an arbitrary filesystem path.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **LLM instructed to write to arbitrary filesystem path.** The skill documentation explicitly instructs the host LLM to save fetched events to a specific file path: `~/clawd/memory/luma-events.json`. This is a direct instruction for the LLM to perform a filesystem write. If the LLM's execution environment allows arbitrary file writes based on skill instructions, an attacker could manipulate this instruction (e.g., via prompt injection against the LLM or by modifying the skill definition) to write to sensitive locations or overwrite critical files. While `~/clawd/memory/` may be a designated safe zone, the instruction itself represents a capability that can be abused if the LLM's filesystem access is too broad or if the path can be influenced by untrusted input. The LLM should not directly interpret and execute filesystem write instructions from untrusted skill documentation; skills should instead expose explicit tools or functions for data persistence with controlled parameters and sandboxed environments. If file persistence is required, the LLM should use a dedicated, sandboxed storage mechanism provided by the platform, not a direct path instruction from the skill. | LLM | SKILL.md:94 |
| HIGH | **Server-Side Request Forgery (SSRF) via unvalidated URL parameter.** The `scripts/fetch_events.py` script constructs a URL with an f-string: `url = f"https://lu.ma/{city_slug}"`. The `city_slug` parameter is taken directly from command-line arguments without validation or sanitization. If the host LLM is prompted to run this script with user-controlled input for `city_slug`, an attacker could inject arbitrary hostnames or IP addresses (e.g., `city_slug="example.com"` or `city_slug="127.0.0.1/admin"`) into the URL. This could lead to Server-Side Request Forgery (SSRF), allowing an attacker to probe internal networks, access metadata services, scan for open ports, or make requests to other external services. The skill documentation gives examples of valid city slugs, but the Python script itself does not enforce these constraints. Implement strict validation and sanitization for `city_slug`: verify it against a predefined list of allowed slugs, or against a strict regex pattern that blocks hostnames, IP addresses, and path traversal. Alternatively, the LLM orchestrator should ensure that only validated city slugs are passed to the skill. | LLM | scripts/fetch_events.py:15 |
| MEDIUM | **Suspicious import: urllib.request.** Import of `urllib.request` detected. This module provides network or low-level system access; verify the import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/regalstreak/luma/scripts/fetch_events.py:7 |
| LOW | **Node lockfile missing.** package.json is present but no lockfile was found (package-lock.json, pnpm-lock.yaml, or yarn.lock). Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/regalstreak/luma/package.json |
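The path-write finding's remediation (a dedicated, sandboxed storage mechanism rather than a raw path instruction) can be sketched as a containment helper. This is a minimal illustration, assuming the platform designates `~/clawd/memory/` as the only writable root; the `SAFE_ROOT` constant and `safe_write_path` function are hypothetical names, not part of the skill:

```python
from pathlib import Path

# Assumption: the platform treats ~/clawd/memory/ as the sole writable root,
# matching the path the skill documentation references.
SAFE_ROOT = Path.home() / "clawd" / "memory"

def safe_write_path(filename: str) -> Path:
    """Resolve filename under SAFE_ROOT; refuse anything that escapes it."""
    candidate = (SAFE_ROOT / filename).resolve()
    # resolve() collapses ".." segments, so an escaping path no longer has
    # SAFE_ROOT among its ancestors and is rejected here.
    if SAFE_ROOT.resolve() not in candidate.parents:
        raise ValueError(f"path escapes sandbox: {candidate}")
    return candidate
```

With this helper, a skill-supplied filename like `luma-events.json` resolves inside the sandbox, while a traversal attempt such as `../../etc/passwd` raises before any write occurs.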
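The SSRF finding's suggested remediation, validating `city_slug` against an allowlist or a strict pattern before URL construction, could look like the following sketch. The `ALLOWED_CITY_SLUGS` set and helper names are hypothetical; the real set of valid slugs comes from the skill's documentation:

```python
import re

# Hypothetical allowlist; the skill's docs define the actual valid slugs.
ALLOWED_CITY_SLUGS = {"nyc", "sf", "london"}

# Fallback pattern: lowercase letters, digits, and hyphens only, so a slug
# can never smuggle in a hostname ("example.com"), a host-plus-path
# ("127.0.0.1/admin"), or a traversal sequence ("../").
SLUG_PATTERN = re.compile(r"^[a-z0-9-]{1,64}$")

def validate_city_slug(city_slug: str) -> str:
    """Return the slug unchanged if safe, otherwise raise ValueError."""
    if city_slug in ALLOWED_CITY_SLUGS:
        return city_slug
    if not SLUG_PATTERN.fullmatch(city_slug):
        raise ValueError(f"rejected city slug: {city_slug!r}")
    return city_slug

def build_url(city_slug: str) -> str:
    # The slug is validated before it ever reaches the f-string.
    return f"https://lu.ma/{validate_city_slug(city_slug)}"
```

Because dots and slashes are outside the permitted character class, both injection examples from the finding (`example.com`, `127.0.0.1/admin`) are rejected before any request is made.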
Scan History
Embed Code
[](https://skillshield.io/report/ea95809ec63bf414)
Powered by SkillShield