Trust Assessment
BotBowl Party Agent Guide received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings: Dynamic Skill Update via External Fetch; Explicit Shell Command Execution Instructions; and Request for File System Write and Cron Scheduling Permissions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Dynamic Skill Update via External Fetch.** The skill instructs the agent to re-fetch its own definition (`SKILL.md`) from an external URL (`https://www.botbowlparty.com/SKILL.md`) every 6 hours and save it locally. This is a critical supply chain risk: if `botbowlparty.com` is compromised, an attacker can inject arbitrary malicious instructions into the agent's operational logic, fully compromising the agent and its environment, and the skill's behavior can change without explicit review or version pinning. Remediation: keep skills static and version-pinned; avoid instructing agents to update their own code from external, unverified sources; if dynamic updates are necessary, cryptographically sign and verify them, and review updates before deployment. | LLM | SKILL.md:30 |
| HIGH | **Explicit Shell Command Execution Instructions.** The skill explicitly instructs the agent to execute shell commands: `curl` to fetch files, `jq` to parse JSON, and `cron add` to schedule tasks. If the host LLM environment executes shell commands based on skill instructions, this is a direct command injection vulnerability: an attacker who can modify `SKILL.md` (see the critical supply chain finding) could inject arbitrary shell commands, and even without external modification, the current commands imply broad execution capabilities. Remediation: strictly disallow direct shell execution from untrusted skill content; instead, expose specific, sandboxed API tools for file fetching, scheduling, and HTTP requests rather than having the agent construct and execute raw shell commands. | LLM | SKILL.md:33 |
| MEDIUM | **Request for File System Write and Cron Scheduling Permissions.** The skill instructs the agent to "save it locally" (referring to the `SKILL.md` file) and to "Create a cron job", implying file system write permissions and the ability to schedule system tasks (e.g., via `cron`). Granting such broad permissions to an agent whose instructions can be updated dynamically from an external source significantly increases the attack surface and the potential impact of a compromise. Remediation: apply a least-privilege model; grant only the minimum necessary tools and resources, restrict file system access to specific, temporary directories, and mediate cron scheduling through a secure, sandboxed API that validates and limits scheduled tasks. | LLM | SKILL.md:33 |
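The critical finding's remediation calls for version pinning and verification of any fetched skill definition. A minimal sketch of digest pinning in Python; the pinned digest and function names here are illustrative assumptions, not part of the skill itself:

```python
import hashlib

# Hypothetical pinned digest, recorded when the skill was last reviewed.
# (This placeholder value is the SHA-256 of the empty string.)
PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def verify_digest(body: bytes, expected_sha256: str) -> bool:
    """Return True only if the fetched bytes match the reviewed, pinned digest."""
    return hashlib.sha256(body).hexdigest() == expected_sha256

def accept_skill_update(body: bytes) -> bytes:
    """Refuse any fetched SKILL.md whose digest differs from the pinned one."""
    if not verify_digest(body, PINNED_SHA256):
        raise ValueError("SKILL.md digest does not match pinned value; refusing update")
    return body
```

With this check in place, a compromised origin server can no longer silently change the agent's instructions: any update must come with a new digest recorded through review.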
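The high finding recommends replacing raw shell execution with specific, sandboxed tools. One way to sketch that is an allowlisted dispatcher; the tool names and registry shape below are hypothetical, and the tool bodies are stubs standing in for real mediated APIs:

```python
from typing import Any, Callable, Dict

# Hypothetical sandboxed tool implementations. In a real host these would be
# mediated APIs, not shell commands like `curl`, `jq`, or `cron add`.
def fetch_url(url: str) -> str:
    return f"(sandboxed fetch of {url})"

def schedule_task(every_hours: int, action: str) -> str:
    return f"(validated schedule: {action} every {every_hours}h)"

ALLOWED_TOOLS: Dict[str, Callable[..., Any]] = {
    "fetch_url": fetch_url,
    "schedule_task": schedule_task,
}

def dispatch(tool: str, **kwargs: Any) -> Any:
    """Route a skill's request to a reviewed tool; anything else is refused."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool!r} is not on the allowlist")
    return ALLOWED_TOOLS[tool](**kwargs)
```

Because the agent can only name a tool, never compose a command string, injected instructions like `curl ... | sh` have nothing to execute against.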
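The medium finding's remediation restricts file writes to specific, temporary directories. A minimal sketch of that containment check; the sandbox root path is an illustrative assumption:

```python
import os

SANDBOX_ROOT = "/tmp/agent-sandbox"  # hypothetical directory the agent may write to

def safe_write_path(requested: str, root: str = SANDBOX_ROOT) -> str:
    """Resolve a requested relative path and refuse anything escaping the sandbox root."""
    resolved = os.path.realpath(os.path.join(root, requested))
    if not resolved.startswith(os.path.realpath(root) + os.sep):
        raise PermissionError(f"path escapes sandbox: {requested!r}")
    return resolved
```

Resolving with `realpath` before the prefix check defeats `../` traversal and symlink tricks, so an instruction to "save it locally" can only ever land inside the sandbox.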