Trust Assessment
appdeploy received a trust score of 50/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 1 critical, 3 high, and 1 medium severity (no low-severity findings). Key findings include "Excessive Permissions: Bash tool declared", "Dangerous tool allowed: Bash", and "Data Exfiltration: Arbitrary file read via src_read tool".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 33/100, consistent with four of the five findings originating from that layer.
Last analyzed on February 12, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Excessive Permissions: Bash tool declared.** The skill explicitly declares 'Bash' as an allowed tool, granting the LLM the ability to execute arbitrary shell commands in its environment. While necessary for some operations, this permission significantly increases the attack surface for command injection and data exfiltration, especially when combined with tools that accept user-controlled file paths, globs, or patterns without strict sanitization. *Remediation:* Review whether Bash access is strictly necessary. If so, route all Bash interactions through highly sanitized, purpose-built functions, consider sandboxing the execution environment, and minimize the scope of what Bash can do. | LLM | Manifest (frontmatter JSON):1 |
| HIGH | **Dangerous tool allowed: Bash.** The skill allows the 'Bash' tool without constraints, granting arbitrary command execution. *Remediation:* Remove unconstrained shell/exec tools from allowed-tools, or add specific command constraints. | Static | skills/tariqsumatri82/appdeploy-1-0-5/SKILL.md:1 |
| HIGH | **Data Exfiltration: Arbitrary file read via src_read tool.** The `src_read` tool allows reading arbitrary files by specifying a `file_path`. Combined with the declared 'Bash' permission, a malicious user prompt could instruct the LLM to read sensitive files (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, `/proc/self/environ`) from its execution environment. The skill specifies no sanitization or restriction on the `file_path` parameter, creating a direct and credible path for data exfiltration. *Remediation:* Implement strict allow-listing or sandboxing for `file_path` parameters. Ensure `src_read` cannot access sensitive system files or user data outside a designated sandbox; validate every path against a safe directory or a list of allowed files. | LLM | SKILL.md:160 |
| HIGH | **Command Injection: Unsanitized glob/pattern parameters in src_glob and src_grep.** The `src_glob` tool takes a `glob` parameter, and the `src_grep` tool takes `pattern` (regex) and `glob` parameters. If these parameters are passed directly to shell commands (e.g., `find`, `grep`) via the 'Bash' tool without proper escaping or sanitization, a malicious user could supply a crafted `glob` or `pattern` to execute arbitrary commands. *Remediation:* Strictly sanitize or escape all user-controlled parameters passed to shell commands, or run them in a sandbox that prevents arbitrary command execution. Consider safer, non-shell file system APIs for globbing and grepping. | LLM | SKILL.md:138 |
| MEDIUM | **Data Exfiltration: Deploying sensitive local files via deploy_app.** The `deploy_app` tool allows specifying an array of `files` to write to the deployment target. Combined with the 'Bash' permission and the `src_read` tool, an attacker could instruct the LLM to read sensitive local files (via `src_read` or a direct Bash command) and include their content in the `files` array for deployment to a remote server, effectively exfiltrating data from the agent's environment. *Remediation:* Strictly validate the content and source of files passed to `deploy_app`, ensure the agent cannot be tricked into deploying sensitive local files, and consider requiring explicit user confirmation for deploying files that originate from the agent's local environment. | LLM | SKILL.md:80 |
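The `src_read` finding above recommends strict allow-listing or sandboxing for `file_path`. A minimal sketch of such a check in Python (the `safe_resolve` name and sandbox layout are illustrative, not part of the skill):

```python
from pathlib import Path

def safe_resolve(file_path: str, sandbox_root: Path) -> Path:
    """Resolve a user-supplied path and refuse anything outside the sandbox.

    Path.resolve() collapses '..' segments and follows symlinks, so a
    traversal attempt like '../../etc/passwd' resolves to a location
    outside sandbox_root and is rejected before any file is read.
    """
    root = sandbox_root.resolve()
    resolved = (root / file_path).resolve()
    if not resolved.is_relative_to(root):  # Path.is_relative_to: Python 3.9+
        raise PermissionError(f"path escapes sandbox: {file_path}")
    return resolved
```

A tool wrapper would call `safe_resolve(file_path, SANDBOX_ROOT).read_text()` instead of opening the raw string, so every read is confined to the designated directory regardless of what the prompt asks for.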
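The `src_glob`/`src_grep` injection finding can likewise be avoided by never handing the glob or pattern to a shell at all. A hedged sketch using Python's own globbing and regex engine (`safe_grep` and its parameters are illustrative); a hostile regex can still cause slow matching, but it can no longer execute commands:

```python
import re
from pathlib import Path

def safe_grep(root: Path, glob_pat: str, pattern: str) -> list[tuple[str, int, str]]:
    """Search files matching glob_pat under root for a regex, with no shell.

    re.compile raises re.error on a malformed pattern instead of passing it
    to grep, and Path.glob never interprets shell metacharacters, so neither
    parameter can inject commands.
    """
    regex = re.compile(pattern)
    hits: list[tuple[str, int, str]] = []
    for path in sorted(root.glob(glob_pat)):
        if not path.is_file():
            continue
        for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
            if regex.search(line):
                hits.append((str(path.relative_to(root)), lineno, line))
    return hits
```

If shelling out is truly unavoidable, `shlex.quote` on each argument is the standard fallback, but an in-process implementation like this removes the injection class entirely.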
[Full report](https://skillshield.io/report/17020cb15a972832)
Powered by SkillShield