Security Audit
ailabs-393/ai-labs-claude-skills:packages/skills/startup-validator
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:packages/skills/startup-validator received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding: the LLM is instructed to execute a shell command, creating a potential command-injection vector.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | LLM instructed to execute shell command, leading to potential command injection | LLM | SKILL.md:100 |

The `SKILL.md` file, which is treated as untrusted content, explicitly instructs the LLM to execute a shell command: `python scripts/market_analyzer.py analysis_data.json`. The LLM is also instructed to "create a JSON file" that is then passed as an argument to this command. If a malicious user can influence the LLM's interpretation of these instructions through prompt injection, they could inject arbitrary commands into the shell execution context by manipulating the filename, the script name, or the shell string the LLM constructs, leading to arbitrary code execution on the host system.

Recommendation: Do not instruct the LLM to directly construct and execute shell commands. Instead, expose the `market_analyzer.py` functionality as a dedicated tool with a well-defined, validated input schema. The tool's implementation should then safely invoke the Python script with sanitized, structured arguments, preventing any direct shell command injection by the LLM or user. For example, create a tool function `run_market_analyzer(market_data: dict, business_data: dict)` that internally handles the file creation and script execution securely.
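The tool-wrapper pattern the finding recommends can be sketched as follows. This is a minimal illustration, not SkillShield's or the skill author's code: the `run_market_analyzer` name comes from the finding, while the temp-file handling, payload keys, and script path are assumptions. The key property is that the LLM supplies only structured data; trusted code builds the argument list, and `subprocess.run` is called without a shell, so there is no string for injected commands to ride in on.

```python
import json
import subprocess
import tempfile
from pathlib import Path

# Path assumed from the finding's example command.
SCRIPT = Path("scripts/market_analyzer.py")

def run_market_analyzer(market_data: dict, business_data: dict) -> str:
    """Safely invoke market_analyzer.py with validated, structured input.

    The LLM never constructs a shell string; it only supplies two dicts,
    which trusted code serializes to a temp file it names itself.
    """
    # Validate the schema before touching the filesystem.
    if not isinstance(market_data, dict) or not isinstance(business_data, dict):
        raise TypeError("market_data and business_data must be dicts")

    # Trusted code picks the filename; the LLM cannot influence it.
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".json", delete=False
    ) as tmp:
        json.dump({"market": market_data, "business": business_data}, tmp)
        tmp_path = tmp.name

    # Argument list + shell=False: no shell parsing, hence no injection point.
    result = subprocess.run(
        ["python", str(SCRIPT), tmp_path],
        capture_output=True, text=True, shell=False, check=True,
    )
    return result.stdout
```

Because the command is an argument vector rather than a concatenated string, even a hostile value inside `market_data` arrives at the script as inert JSON content, never as shell syntax.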
Embed Code
[SkillShield report](https://skillshield.io/report/0a5e82a20e8759ca)
Powered by SkillShield