Security Audit
ailabs-393/ai-labs-claude-skills:dist/skills/startup-validator
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:dist/skills/startup-validator received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding is Direct Shell Command Execution Capability.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Direct Shell Command Execution Capability.** The skill's instructions explicitly direct the LLM to execute a local Python script (`scripts/market_analyzer.py`) using a shell command (`python scripts/market_analyzer.py analysis_data.json`). This grants the LLM the ability to perform shell executions on the host system. While the provided Python script appears to safely process JSON input, the capability to execute arbitrary local scripts is a critical security risk: an attacker could leverage prompt injection to manipulate the LLM into executing different, malicious commands, or exploit vulnerabilities in the execution environment. It also introduces a supply chain risk if the script itself is compromised or replaced. This capability represents an excessive permission for an AI agent, as it allows direct interaction with the host system's shell. Avoid instructing the LLM to execute shell commands directly; instead, expose specific, sandboxed API functions for any necessary local processing. If local script execution is unavoidable, run it in a highly restricted, isolated environment (e.g., containerized, minimal permissions) and ensure the script itself is immutable and thoroughly audited. | LLM | SKILL.md:149 |
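The mitigation described in the finding — replacing free-form shell access with one narrow, sandboxed entry point — can be sketched as follows. This is a minimal illustration, not part of the skill: the wrapper name `run_market_analyzer` and the hardening choices (allow-list of one script, input validation, fixed argv, empty environment, timeout) are assumptions about how such a function might look.

```python
import json
import subprocess
import sys
from pathlib import Path

# Allow-list of exactly one script: the wrapper can never be steered
# into running anything else, regardless of what the LLM asks for.
ALLOWED_SCRIPT = Path("scripts/market_analyzer.py").resolve()

def run_market_analyzer(input_path: str, timeout: int = 30) -> str:
    """Run the allow-listed analyzer on a validated JSON file."""
    data_file = Path(input_path).resolve()
    json.loads(data_file.read_text())  # reject malformed input up front
    result = subprocess.run(
        # Fixed argument vector, no shell=True: the input path is data,
        # never interpreted by a shell.
        [sys.executable, str(ALLOWED_SCRIPT), str(data_file)],
        capture_output=True,
        text=True,
        timeout=timeout,  # hard wall-clock limit
        env={},           # no inherited environment variables
        check=True,       # non-zero exit raises instead of passing silently
    )
    return result.stdout
```

Exposing only this function to the agent (rather than a generic shell tool) shrinks the attack surface: a prompt-injected "command" can at most supply a different JSON file, not a different program.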
Scan History
Embed Code
[SkillShield report](https://skillshield.io/report/5722df5928c2c040)
Powered by SkillShield