Trust Assessment
The skill `bags` received a trust score of 86/100, placing it in the Mostly Trusted category. It has passed most security checks, with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is Potential Command Injection via Unsanitized Shell Command Arguments.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Potential Command Injection via Unsanitized Shell Command Arguments.** The skill's shell command examples interpolate variables (`other_agent_name`, `BAGS_API_KEY`) directly into `curl` arguments (URL query parameters and HTTP headers). If an AI agent constructs these commands from unsanitized user input, a malicious user can inject arbitrary shell commands by supplying specially crafted values, leading to arbitrary code execution on the host system. Agents should strictly sanitize or escape all user-provided input before interpolating it into shell commands: URL-encode query parameters and shell-escape header values (e.g., with `printf %q` in bash). Better still, use a robust HTTP client library in a language such as Python or Node.js that passes arguments safely, rather than building shell commands by string concatenation. | LLM | skill.md:172 |
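To illustrate the remediation the finding recommends, here is a minimal sketch of the safer approach: constructing the request in Python with the standard library instead of interpolating values into a shell command. The endpoint URL and the `agent` parameter name are hypothetical, chosen only to mirror the skill's example; the point is that percent-encoding the query value means no shell is ever involved, so shell metacharacters in user input are inert.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical inputs mirroring the skill's variables; the value below is
# a deliberately malicious example, not real data.
other_agent_name = "bob; rm -rf /"
api_key = "EXAMPLE_BAGS_API_KEY"

# urlencode() percent-encodes the query value, so ";", "/", and spaces
# become %3B, %2F, and "+" -- there is no shell to interpret them.
query = urlencode({"agent": other_agent_name})
url = f"https://api.example.com/v1/agents?{query}"  # hypothetical endpoint

# Headers are passed as structured data, never concatenated into a command.
req = Request(url, headers={"Authorization": f"Bearer {api_key}"})
print(req.full_url)
```

Because the request is built from structured arguments rather than a shell string, the injection vector described in the finding does not exist in this form; the same principle applies to `child_process.execFile` in Node.js or any client that separates data from command syntax.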
Scan History
Full report: [skillshield.io/report/b6234d217c94e00c](https://skillshield.io/report/b6234d217c94e00c)
Powered by SkillShield