Security Audit
ailabs-393/ai-labs-claude-skills:dist/skills/travel-planner
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:dist/skills/travel-planner received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 7 findings: 3 critical, 4 high, 0 medium, and 0 low severity. Key findings include file read plus network send exfiltration, sensitive path access to AI agent config, and potential command injection via LLM-orchestrated CLI calls.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Manifest Analysis layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (7)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **File read + network send exfiltration.** AI agent config/credential file access. Remove access to sensitive files not required by the skill's stated purpose. SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | dist/skills/travel-planner/SKILL.md:499 |
| CRITICAL | **File read + network send exfiltration.** AI agent config/credential file access. Remove access to sensitive files not required by the skill's stated purpose. SSH keys, cloud credentials, and browser data should never be read by skills unless explicitly part of their declared functionality. | Manifest | dist/skills/travel-planner/SKILL.md:500 |
| CRITICAL | **Potential command injection via LLM-orchestrated CLI calls.** The `SKILL.md` describes a workflow where the LLM is expected to execute shell commands, such as `python3 scripts/plan_generator.py --trip-id <id> --output plan.json` and `python3 scripts/travel_db.py export > backup.json`. Because `index.js` is a placeholder containing no actual logic, the LLM itself constructs and executes these commands from user input. If user input can influence parameters like `<id>` or the output filename (`backup.json`), an attacker could inject arbitrary shell commands (e.g., `'; rm -rf /'`) into the command string, leading to arbitrary code execution on the host system. Implement a robust tool-calling mechanism with strict input validation and sanitization for all arguments passed to shell commands. Avoid direct shell command execution based on user input; where possible, use dedicated API or library calls instead, and never interpolate user input directly into command strings. | LLM | SKILL.md:176 |
| HIGH | **Sensitive path access: AI agent config.** Access to AI agent config path detected: `~/.claude/`. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | dist/skills/travel-planner/SKILL.md:499 |
| HIGH | **Sensitive path access: AI agent config.** Access to AI agent config path detected: `~/.claude/`. This may indicate credential theft. Verify that access to this sensitive path is justified and declared. | Static | dist/skills/travel-planner/SKILL.md:500 |
| HIGH | **Sensitive data exfiltration via LLM-orchestrated file access and web search.** The skill stores sensitive user data (preferences, trip plans) in local JSON files (`~/.claude/travel_planner/preferences.json`, `~/.claude/travel_planner/trips.json`). The `SKILL.md` describes CLI commands like `python3 scripts/travel_db.py get_preferences` and `python3 scripts/travel_db.py export > backup.json`. If an attacker can use prompt injection to trick the LLM into executing these commands and then either control the output destination (e.g., redirect to a public location) or cause the LLM to embed the file contents in its response, sensitive user data could be exfiltrated. The skill's use of web search (Step 4) for destination research poses an additional risk if user-controlled sensitive data is inadvertently included in search queries, potentially leaking information to third-party search providers. Implement strict access controls and sanitization for any commands that read or export user data. Ensure that the LLM cannot be prompted to output the contents of local files directly. For web searches, strictly filter and sanitize any user-provided information before it is included in search queries. | LLM | SKILL.md:200 |
| HIGH | **Broad prompt injection surface due to LLM-driven workflow.** The skill's `index.js` is a placeholder, so the LLM is expected to interpret the `SKILL.md` and directly orchestrate the execution of Python code snippets and shell commands based on natural-language user input. This design creates a broad prompt injection surface: a malicious user can craft inputs that trick the LLM into performing unintended actions, such as triggering sensitive commands (e.g., data exfiltration), manipulating internal state (e.g., `save_preferences` with malicious data), or attempting arbitrary code execution via command injection. Implement a robust, explicit tool-calling interface (e.g., a well-defined API in `index.js`) that strictly validates and sanitizes all user inputs before they reach any backend operation. Avoid relying on the LLM to directly interpret and execute arbitrary code or shell commands from natural-language prompts. | LLM | SKILL.md:1 |
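The command injection remediation above can be sketched concretely. The following is a minimal illustration, not code from the skill: `build_plan_command` is a hypothetical helper that validates arguments against a strict allowlist and returns an argument *list*, so the command is never assembled as a shell string and metacharacters like `; rm -rf /` stay inert.

```python
import re

# Allowlist pattern for trip ids: alphanumerics, dash, underscore, max 64 chars.
TRIP_ID_RE = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def build_plan_command(trip_id: str, output: str = "plan.json") -> list:
    """Build an argv list for the plan generator, rejecting unsafe arguments."""
    if not TRIP_ID_RE.fullmatch(trip_id):
        raise ValueError(f"rejected trip id: {trip_id!r}")
    # Forbid path separators, option-like names, and non-JSON extensions.
    if "/" in output or "\\" in output or output.startswith("-") or not output.endswith(".json"):
        raise ValueError(f"rejected output filename: {output!r}")
    return ["python3", "scripts/plan_generator.py",
            "--trip-id", trip_id, "--output", output]
```

The resulting list would be passed to `subprocess.run(argv, check=True)` without `shell=True`, so no shell ever parses the arguments.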
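The sensitive-path findings call for confining file access to the skill's declared data directory. A minimal sketch of such a check (the function name is hypothetical, not part of the skill) resolves the requested path and accepts it only if it lands inside the permitted root, which also defeats `../` traversal:

```python
from pathlib import Path

def is_allowed(path: str, root: str) -> bool:
    """True only if `path` resolves to `root` or a descendant of it."""
    p = Path(path).expanduser().resolve()
    r = Path(root).expanduser().resolve()
    return p == r or r in p.parents
```

Under this policy a read of `~/.claude/travel_planner/trips.json` is permitted, while `~/.ssh/id_rsa` or a traversal like `~/.claude/travel_planner/../../.ssh/id_rsa` is refused.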
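Finally, the prompt-injection finding recommends an explicit tool-calling interface in place of free-form command execution. One way such a dispatcher could look (tool names and stub bodies here are illustrative, not from the skill): the model may only request operations from a fixed allowlist, and anything else is rejected before it touches the backend.

```python
from typing import Any, Callable, Dict

# Fixed allowlist of operations the LLM may invoke; stubs stand in for real handlers.
TOOLS: Dict[str, Callable[..., Any]] = {
    "get_preferences": lambda: {"budget": "mid"},
    "list_trips": lambda: [],
}

def dispatch(tool_name: str, **kwargs: Any) -> Any:
    """Route a model-requested tool call; never execute model-supplied code or shell."""
    handler = TOOLS.get(tool_name)
    if handler is None:
        raise ValueError(f"unknown tool: {tool_name!r}")
    return handler(**kwargs)
```

Because unknown names raise immediately, an injected instruction like "run `rm -rf /`" has no path to execution: it is simply not a registered tool.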
Full report: https://skillshield.io/report/755a51400d0813fc