Trust Assessment
starwars received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include untrusted instructions for LLM agent behavior, and an instruction directing the LLM to construct and execute shell commands with user input.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted instructions for LLM agent behavior.** The `SKILL.md` document, which is treated as untrusted input, contains explicit instructions for the AI agent on how to use the skill (e.g., 'Run `./starwars people "name"`'). This constitutes a prompt injection vulnerability: an attacker could modify this documentation to manipulate the agent's behavior, potentially leading to unintended actions or security breaches. *Remediation:* Move agent-specific instructions out of untrusted documentation (e.g., `SKILL.md`) and into a trusted configuration or prompt-engineering layer. The LLM should not derive its operational instructions from user-editable skill documentation. | LLM | SKILL.md:68 |
| HIGH | **LLM instructed to construct and execute shell commands with user input.** The untrusted `SKILL.md` instructs the AI agent to construct and execute shell commands using user-provided input (e.g., `starwars people "name"`). If the agent does not properly sanitize or escape the `name` parameter before incorporating it into the command string and executing it, a malicious user could inject arbitrary shell commands. This is a direct exploit path enabled by the prompt injection vulnerability (SS-LLM-001). *Remediation:* (1) Address the prompt injection (SS-LLM-001) by moving agent instructions to a trusted source. (2) Ensure the AI agent strictly sanitizes and escapes all user-provided input before incorporating it into shell commands; ideally, use a structured API call or a dedicated, secure library function instead of direct shell command execution. If shell execution is unavoidable, use parameterized commands or robust escaping mechanisms. | LLM | SKILL.md:73 |
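The parameterized-command remediation above can be illustrated with a short Python sketch. The `build_command`, `unsafe_command`, and `quoted_command` helpers below are hypothetical names, not part of the `starwars` skill; the sketch assumes the skill is invoked as `./starwars people "name"`, as shown in the findings.

```python
import shlex

# Safe pattern: build an argument vector. Each element reaches the OS as a
# single argv entry, so shell metacharacters in `name` are never interpreted.
def build_command(name: str) -> list[str]:
    return ["./starwars", "people", name]

# Unsafe pattern the HIGH finding warns about: interpolating user input into
# a single shell string, which a shell would then parse.
def unsafe_command(name: str) -> str:
    return f'./starwars people "{name}"'  # injectable via embedded quotes

# If a single shell string is unavoidable, quote each user-supplied token.
def quoted_command(name: str) -> str:
    return f"./starwars people {shlex.quote(name)}"

# A malicious "name" that would break out of the quotes in unsafe_command:
malicious = '"; rm -rf ~; echo "'
print(build_command(malicious))   # stays a plain argv entry, never parsed
print(quoted_command(malicious))  # metacharacters neutralized by quoting
```

An argv list of this shape would typically be passed to `subprocess.run(argv)` without `shell=True`, which avoids shell parsing entirely; the string forms exist only for cases where a shell is genuinely required.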
Powered by SkillShield