Trust Assessment
ai-automation-workflows received a trust score of 65/100, placing it in the Caution category: users should review its security findings before deployment.
SkillShield's automated analysis identified 6 findings: 1 critical, 4 high, 1 medium, and 0 low severity. Key findings include Untrusted Remote Script Execution (curl | sh), Excessive Bash Permissions Declared, and Potential Command Injection via Arbitrary Command Execution.
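The critical `curl | sh` finding has a straightforward mitigation: fetch the installer to disk before running it. A minimal sketch follows; the `safe_install` name and exact steps are ours, not the skill's, and whether the vendor publishes a digest to compare against is an assumption to verify.

```shell
# Safer alternative to `curl -fsSL https://cli.inference.sh | sh`:
# download to a file, inspect it, check its digest, then execute.
# Helper name and flow are illustrative, not from the skill.
safe_install() {
  url="$1"
  tmp="$(mktemp)" || return 1
  if ! curl -fsSL "$url" -o "$tmp"; then   # fetch to disk, never pipe to sh
    rm -f "$tmp"
    return 1
  fi
  sha256sum "$tmp"          # compare against a vendor-published digest, if any
  "${PAGER:-less}" "$tmp"   # read what you are about to run
  sh "$tmp"; status=$?      # execute only after review
  rm -f "$tmp"
  return "$status"
}
# usage: safe_install https://cli.inference.sh
```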
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 3/100, indicating significant behavioral-safety concerns.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted Remote Script Execution (curl \| sh).** The skill demonstrates installing a CLI by piping a script directly from a remote URL to `sh` (`curl -fsSL https://cli.inference.sh \| sh`). This is a critical supply-chain risk: it executes code from an external source without prior review, so a compromised server or script leads to arbitrary code execution on the host. Avoid piping remote scripts directly to `sh`; download the script, review its contents, and then execute it locally, or use package managers or verified installation methods. | LLM | SKILL.md:19 |
| HIGH | **Excessive Bash Permissions Declared.** The skill declares `Bash(*)` permissions, granting it the ability to execute arbitrary shell commands. While the demonstrated automation workflows may require broad shell access, this level of permission is excessive and significantly increases the attack surface: an attacker could execute malicious commands, access sensitive files, or exfiltrate data if the agent is prompted to run untrusted code. Restrict Bash permissions to a minimal set of specific commands or a more constrained shell environment; avoid `Bash(*)` unless absolutely necessary and thoroughly justified. | LLM | SKILL.md:1 |
| HIGH | **Potential Command Injection via Arbitrary Command Execution.** The `run_with_alert` function uses `$("$@" 2>&1)` to execute commands. If the arguments passed to this function (`$@`) are derived from untrusted or user-controlled input, an attacker could inject arbitrary shell commands, and the declared `Bash(*)` permission would allow the exploit. Avoid executing arbitrary commands from untrusted input; if dynamic command execution is necessary, strictly validate and sanitize all arguments, or use a fixed command with carefully controlled parameters. | LLM | SKILL.md:225 |
| HIGH | **Potential Command Injection via File Content Embedding.** The `data_processing.sh` script uses `$(cat $file)` to embed a file's content directly into an LLM prompt. If `$file` is user-controlled and contains shell metacharacters (e.g., `foo.txt; rm -rf /`), it could lead to command injection; the declared `Bash(*)` permission would allow the exploit. Strictly validate and sanitize file paths before using them in shell commands, quote variable expansions, and avoid directly embedding raw file content from untrusted sources into commands. | LLM | SKILL.md:279 |
| HIGH | **Data Exfiltration via LLM Prompt (File Content).** The same `$(cat $file)` pattern in `data_processing.sh` embeds file content into a prompt sent to an external LLM service (openrouter/claude-haiku-45). If `$file` contains sensitive data, that data is exfiltrated to the service. Avoid sending raw file content, especially from potentially sensitive files, to external LLM services without explicit user consent or prior sanitization; implement filtering or masking for sensitive information. | LLM | SKILL.md:279 |
| MEDIUM | **Data Exfiltration via External Webhook.** The `run_with_alert` function sends command output (`$result`) and the executed command (`$*`) to an external webhook (https://your-webhook.com/alert). If the webhook URL is attacker-controlled, or if sensitive information appears in the output or arguments, this could lead to data exfiltration. Ensure all external endpoints are trusted, filter or redact sensitive information before sending, and use secure, authenticated channels for alerts. | LLM | SKILL.md:230 |
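Both command-injection findings reduce to the same fix: validate inputs and quote every expansion. A hedged sketch of hardened versions follows; the function bodies are reconstructions, not the skill's originals, and the allowlist contents are placeholders.

```shell
# Hardened sketches of the two flagged patterns (illustrative only).

# Instead of `$(cat $file)` (unquoted: word-splits, globs, and lets a
# crafted name smuggle options), validate the path and quote it.
read_file_safely() {
  file="$1"
  case "$file" in
    *..*|-*) printf 'refusing suspicious path: %s\n' "$file" >&2; return 1 ;;
  esac
  [ -f "$file" ] || return 1
  cat -- "$file"              # `--` stops option injection via leading dashes
}

# run_with_alert restricted to an allowlist, per the finding's advice.
# Quoted "$@" passes each argument as a single word, so metacharacters
# in arguments stay data and are never re-parsed by the shell.
run_with_alert() {
  case "$1" in
    ls|echo|df) ;;            # placeholder allowlist; narrow to real needs
    *) printf 'command not allowed: %s\n' "$1" >&2; return 1 ;;
  esac
  result="$("$@" 2>&1)"
  status=$?
  printf '%s\n' "$result"
  return "$status"
}
```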
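For the webhook finding, output can be scrubbed before it leaves the host. A sketch using GNU sed; the `redact` name and the patterns are illustrative assumptions, and https://your-webhook.com/alert is the skill's own placeholder URL.

```shell
# Illustrative redaction filter (GNU sed: the `I` flag is a GNU
# extension) to run on command output before POSTing it to an alert
# webhook. Extend the patterns for the secrets your environment handles.
redact() {
  sed -E \
    -e 's/(api[_-]?key|token|password)[=:][^[:space:]]+/\1=[REDACTED]/Ig' \
    -e 's/sk-[A-Za-z0-9]+/[REDACTED]/g'
}

# usage, mirroring the skill's run_with_alert flow:
#   result="$(some_command 2>&1)"
#   printf '%s\n' "$result" | redact | curl -fsS -X POST \
#     --data-binary @- https://your-webhook.com/alert
```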
[View the full report on SkillShield](https://skillshield.io/report/a5e2b25392223c33)