Trust Assessment
last30days received a trust score of 13/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 3 critical, 1 high, 1 medium, and 0 low severity. Key findings include unsafe environment variable passthrough, credential harvesting, and a suspicious `urllib.request` import.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, reflecting the prompt-injection and command-injection findings in `SKILL.md`.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Credential harvesting.** Reading well-known credential environment variables. Skills should only access environment variables they explicitly need; bulk environment dumps (`os.environ.copy()`, `JSON.stringify(process.env)`) are almost always malicious. Remove access to Keychain, GPG keys, and credential stores. | Manifest | `skills/zats/last30days/scripts/lib/env.py:41` |
| CRITICAL | **Prompt Injection from Untrusted Skill Definition.** The `SKILL.md` file, which is explicitly marked as untrusted input, contains direct instructions to the host LLM (e.g., "CRITICAL: Parse User Intent", "IMPORTANT: Do NOT ask about target tool before research."). This is an attempt by the untrusted skill definition to manipulate the LLM's behavior and processing flow, violating the principle that untrusted content should not issue commands to the host LLM. Remove all direct instructions to the host LLM from the untrusted `SKILL.md` content; the LLM's behavior should be governed solely by its trusted system prompt. | LLM | `SKILL.md:21` |
| CRITICAL | **Command Injection via User-Controlled Arguments.** `SKILL.md` explicitly instructs the host LLM to execute a shell command: `python3 ./scripts/last30days.py "$ARGUMENTS" --emit=compact 2>&1`, where `$ARGUMENTS` is derived from user input. If the LLM substitutes user input into `$ARGUMENTS` without proper shell escaping or sanitization, a malicious user could inject arbitrary shell commands (e.g., `"; rm -rf /"`), leading to remote code execution on the host system. Avoid direct shell execution with user-controlled input; if it is unavoidable, rigorously sanitize and shell-escape all user-provided arguments, or preferably use a dedicated tool call with structured parameters instead of a raw shell command. | LLM | `SKILL.md:95` |
| HIGH | **Unsafe environment variable passthrough.** Access to well-known credential environment variables. Minimize environment variable exposure: pass only required, non-sensitive variables to MCP servers, and use dedicated secret management instead of environment passthrough. | Manifest | `skills/zats/last30days/scripts/lib/env.py:41` |
| MEDIUM | **Suspicious import: `urllib.request`.** Import of `urllib.request` detected; this module provides network access. Verify this import is necessary, as network and system modules in skill code may indicate data exfiltration. | Static | `skills/zats/last30days/scripts/lib/http.py:8` |
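The remediation for the two environment-variable findings can be sketched as an allowlist: the skill enumerates the variables it actually needs rather than copying `os.environ` wholesale. This is a minimal illustration, not the skill's actual code; the variable names are assumptions.

```python
import os

# Illustrative allowlist: only variables the skill genuinely needs.
# (These names are assumptions for the example, not taken from env.py.)
ALLOWED_ENV_VARS = {"PATH", "LANG", "TZ"}

def safe_env() -> dict:
    """Return a minimal environment containing only allowlisted variables."""
    return {k: v for k, v in os.environ.items() if k in ALLOWED_ENV_VARS}

# By contrast, os.environ.copy() hands every variable -- API keys, tokens,
# credential-store paths -- to whatever consumes the result, which is why
# bulk dumps are flagged as credential harvesting.
```

The same pattern applies on the consuming side: an MCP server launcher should receive `safe_env()` rather than the parent process environment.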
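For the command-injection finding, the standard mitigation is to avoid handing user input to a shell at all: pass arguments as a list so they are never shell-interpreted, or quote them with `shlex.quote` when a shell string is unavoidable. A minimal sketch, assuming a `run_skill` wrapper (hypothetical; the script path is taken from the finding):

```python
import shlex
import subprocess
import sys

def run_skill(arguments: str) -> str:
    # Passing argv as a list bypasses the shell entirely, so input like
    # '"; rm -rf /' arrives as one literal argument, not an injected command.
    result = subprocess.run(
        [sys.executable, "./scripts/last30days.py", arguments, "--emit=compact"],
        capture_output=True,
        text=True,
    )
    return result.stdout

# If a shell string is truly required, quote the untrusted input first:
#   cmd = f"python3 ./scripts/last30days.py {shlex.quote(arguments)} --emit=compact"
```

The report's preferred fix, a dedicated tool call with structured parameters, removes the shell from the path entirely and makes quoting unnecessary.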
[Full report on SkillShield](https://skillshield.io/report/14302f00c692f0ed)