Trust Assessment
kameo received a trust score of 34/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 12 findings: 3 critical, 3 high, 5 medium, and 1 low severity. Representative findings include "Potential hardcoded secret (high entropy)," "Sensitive environment variable access: $USER," and "Node lockfile missing."
The analysis covered four layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest, at 0/100.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (12)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via Image Path in base64 command.** The `IMAGE_PATH` variable, user-controlled input, is interpolated directly into a `base64` shell command without quoting or sanitization. An attacker can inject arbitrary shell commands via a malicious `IMAGE_PATH` (e.g., `'; rm -rf /; #`), enabling arbitrary code execution on the host. *Remediation:* Strictly validate `IMAGE_PATH` for shell metacharacters, or read and base64-encode the file through a method that handles paths safely (such as a Python script), or ensure the variable is properly quoted and escaped wherever it appears in a command. | LLM | scripts/generate_video.sh:46 |
| CRITICAL | **Command Injection via Aspect Ratio in Heredoc.** The `ASPECT_RATIO` variable, user-controlled input, is interpolated directly into a heredoc block (`cat > "$REQUEST_FILE" << EOF`). A malicious value (e.g., `9:16"; rm -rf /; #`) can terminate the heredoc prematurely and inject arbitrary shell commands, enabling arbitrary code execution on the host. *Remediation:* Strictly validate `ASPECT_RATIO` against shell metacharacters and heredoc terminators, or generate the JSON via a more robust tool or a Python script that interpolates strings safely. | LLM | scripts/generate_video.sh:50 |
| CRITICAL | **Credential Harvesting / Data Exfiltration via Placeholder API Keys.** `scripts/register.sh` contains placeholder values for `SUPABASE_URL` and `SUPABASE_ANON_KEY`. If a user replaces these with attacker-controlled endpoints, the script sends the user's email and password to those endpoints during registration and login, a severe credential-harvesting and data-exfiltration risk. *Remediation:* Remove `register.sh` unless it ships with pre-configured, trusted endpoints; otherwise, warn users clearly about replacing placeholders with untrusted values, or hardcode trusted endpoints and prevent arbitrary endpoint specification. | LLM | scripts/register.sh:4 |
| HIGH | **JSON Injection via Email and Password in curl body.** The `EMAIL` and `PASSWORD` variables, user-controlled inputs, are interpolated directly into JSON strings within `curl` commands for signup and login. Input containing quotes or other JSON-breaking characters (e.g., `foo@bar.com","admin":true}`) can manipulate the API request, potentially bypassing authentication or creating privileged accounts if the Supabase endpoint is susceptible. *Remediation:* JSON-encode `EMAIL` and `PASSWORD` before interpolating them into the `curl` data payload; `jq -Rs .` per variable, as `generate_video.sh` already does for `PROMPT`, prevents this injection. | LLM | scripts/register.sh:24 |
| HIGH | **Command Injection via Heredoc in credentials.json creation.** The `KAMEO_KEY` and `EMAIL` variables, which can carry untrusted data (API responses and user input), are interpolated directly into the heredoc that writes `~/.config/kameo/credentials.json`. Values that terminate the heredoc prematurely (e.g., `kam_...EOF\nrm -rf / #`) enable arbitrary code execution on the host. *Remediation:* Strictly validate `KAMEO_KEY` and `EMAIL` against shell metacharacters and heredoc terminators, or write the file with a dedicated utility or a Python script that interpolates strings safely. | LLM | scripts/register.sh:77 |
| HIGH | **Command Injection via Python String Interpolation in enhance_prompt.sh.** The `IMAGE_PATH` and `DIALOGUE` variables, user-controlled inputs, are interpolated directly into Python string literals inside the embedded Python script. Input containing single quotes (e.g., `foo.jpg'; import os; os.system('evil_command') #`) breaks out of the string and injects arbitrary Python, which can in turn run arbitrary shell commands. *Remediation:* Never interpolate shell variables into Python source inside heredocs; pass them as arguments (e.g., `python3 -c '...' "$IMAGE_PATH" "$DIALOGUE"`) and read them via `sys.argv`, so the shell handles quoting and Python receives them as plain strings. | LLM | scripts/enhance_prompt.sh:60 |
| MEDIUM | **Potential hardcoded secret (high entropy).** A high-entropy string (entropy = 5.00) was found in a credential-like context. *Remediation:* Verify this is not a hardcoded secret; use environment variables for sensitive values. | Static | skills/veya2ztn/kameo/SKILL.md:32 |
| MEDIUM | **Potential hardcoded secret (high entropy).** A high-entropy string (entropy = 5.00) was found in a credential-like context. *Remediation:* Verify this is not a hardcoded secret; use environment variables for sensitive values. | Static | skills/veya2ztn/kameo/SKILL.md:38 |
| MEDIUM | **Potential hardcoded secret (high entropy).** A high-entropy string (entropy = 5.00) was found in a credential-like context. *Remediation:* Verify this is not a hardcoded secret; use environment variables for sensitive values. | Static | skills/veya2ztn/kameo/SKILL.md:98 |
| MEDIUM | **Sensitive environment variable access: `$USER`.** Access to the sensitive environment variable `$USER` was detected in a shell context. *Remediation:* Verify this access is necessary and that the value is not exfiltrated. | Static | skills/veya2ztn/kameo/scripts/register.sh:34 |
| MEDIUM | **Prompt Injection to Kameo API.** The user-controlled `PROMPT` variable is sent directly to the Kameo API. If the Kameo backend uses an LLM, this is a direct prompt-injection vector: a crafted prompt could manipulate the AI's behavior, generate unintended content, or extract information if the API is susceptible. *Remediation:* Validate and sanitize `PROMPT` before sending it; consider an LLM guardrail or a separate evaluation model to filter malicious prompts, and document the risks of user-provided prompts. | LLM | scripts/generate_video.sh:48 |
| LOW | **Node lockfile missing.** `package.json` is present but no lockfile (package-lock.json, pnpm-lock.yaml, or yarn.lock) was found. *Remediation:* Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/veya2ztn/kameo/package.json |
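The two shell-injection findings in `generate_video.sh` share one remediation pattern: validate user input before it reaches the parser, and let quoted redirections hand paths to the kernel rather than re-parsing them. A minimal sketch of that pattern (the function names are illustrative, not taken from the skill's scripts):

```shell
# validate_aspect_ratio: accept only digits and a colon (e.g. "9:16"),
# rejecting anything that could break out of a heredoc or command line.
validate_aspect_ratio() {
  case "$1" in
    ''|*[!0-9:]*) return 1 ;;
    *) return 0 ;;
  esac
}

# encode_image: a quoted redirection passes the path straight through;
# shell metacharacters in the filename are never re-parsed as code.
encode_image() {
  base64 < "$1"
}
```

The allowlist in `case` is deliberately strict: it is far easier to verify "digits and a colon only" than to enumerate every dangerous metacharacter.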
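For the JSON-injection finding in `register.sh`, the `jq`-based encoding the report recommends can be sketched as follows; `jq --arg` (like `jq -Rs .`) emits a properly escaped JSON string, so quotes in the email or password remain data. The field names follow the finding, not a verified Supabase payload:

```shell
# Hostile input from the finding: tries to smuggle in an "admin" field.
EMAIL='foo@bar.com","admin":true}'
PASSWORD='p@ss"word'

# jq escapes both values, so the payload stays a two-field object no
# matter what characters the user supplies.
BODY=$(jq -n --arg email "$EMAIL" --arg password "$PASSWORD" \
  '{email: $email, password: $password}')
```

Round-tripping `$BODY` through `jq -r '.email'` returns the hostile string intact, and `jq 'has("admin")'` reports `false`: the injected field never materializes.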
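The `enhance_prompt.sh` finding is fixed by passing values as arguments instead of splicing them into Python source. With a quoted heredoc delimiter and `sys.argv`, the hostile path below is inert data; this is a sketch of the pattern, not the script's actual code:

```shell
# Hostile path from the finding: would run `id` if spliced into a
# Python string literal.
IMAGE_PATH="foo.jpg'; import os; os.system('id') #"

# `python3 -` reads the program from stdin; the quoted 'PY' delimiter
# blocks shell expansion, and the value arrives only via sys.argv.
OUT=$(python3 - "$IMAGE_PATH" <<'PY'
import sys
print(sys.argv[1])
PY
)
```

`$OUT` comes back byte-for-byte equal to `$IMAGE_PATH`: the embedded `os.system('id')` is printed, never executed.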
[View the full report on SkillShield](https://skillshield.io/report/3071387069b57370)