Trust Assessment
skill-creator received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 4 findings: 2 critical, 2 high, 0 medium, and 0 low severity. Key findings include Arbitrary File Creation/Write via Path Traversal in `init_skill.py`, Arbitrary File Read/Package and Write in `package_skill.py`, Arbitrary File Read in `quick_validate.py`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The Static Code Analysis layer scored lowest at 25/100, indicating areas for improvement.
Last analyzed on February 16, 2026 (commit ccf6204f). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary file creation/write via path traversal in `init_skill.py`** The `scripts/init_skill.py` script allows arbitrary file creation and writes to sensitive locations because the `skill-name` argument is concatenated directly with `output-path` to form the target directory, without sanitizing path traversal sequences (e.g., `../`). An attacker could supply a `skill-name` such as `../../../../etc/malicious_dir` to create or overwrite files outside the intended skill directory, including critical system locations. The `--path` argument likewise lacks sanitization against arbitrary system paths. If an LLM with code-execution capabilities were prompted to run this script with malicious arguments, it could compromise the system. *Remediation:* restrict `skill-name` to alphanumeric characters and hyphens, rejecting path separators and traversal sequences; validate that `--path` resolves to a safe, restricted directory, or execute the script in a chroot/sandbox environment. | Static | scripts/init_skill.py:60 |
| CRITICAL | **Arbitrary file read/package and write in `package_skill.py`** The `scripts/package_skill.py` script lets an attacker read and package arbitrary files and write the resulting `.skill` (zip) archive to an arbitrary location. The `skill_path` argument, which specifies the directory to package, is not restricted to a safe scope: a value such as `/` or `~/.ssh` would exfiltrate system files or user credentials. The `output_dir` argument is similarly unsanitized, allowing the archive to be written anywhere, potentially overwriting critical files or planting malicious content. If an LLM with code-execution capabilities were prompted to run this script with malicious arguments, it could lead to data exfiltration or system compromise. *Remediation:* validate that `skill_path` stays within an allowed, restricted directory (e.g., a temporary sandbox); validate that `output_dir` resolves to a designated output location, or execute the script in a chroot/sandbox environment. | Static | scripts/package_skill.py:48 |
| HIGH | **Arbitrary file read in `quick_validate.py`** The `scripts/quick_validate.py` script reads `SKILL.md` from a path supplied as the `skill_path` command-line argument, which is not restricted to a safe directory. An attacker could point `skill_path` at a sensitive system directory (e.g., `/etc`) to probe for files such as `/etc/SKILL.md`. Although the script only looks for `SKILL.md`, the ability to probe arbitrary directories is an information-disclosure risk, particularly if an LLM with code-execution capabilities is prompted to run the script with malicious arguments. *Remediation:* validate that `skill_path` stays within an allowed, restricted directory (e.g., a temporary sandbox). | Static | scripts/quick_validate.py:12 |
| HIGH | **LLM analysis found no issues despite critical deterministic findings** Deterministic layers flagged 2 CRITICAL findings, but LLM semantic analysis returned clean. This may indicate prompt injection or analysis evasion. | LLM | (sanity check) |
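The remediation suggested for the `init_skill.py` finding can be sketched as a simple allowlist validator. This is a minimal illustration, not the script's actual API: the function name and the exact name pattern are assumptions; the key point is rejecting anything outside a strict character set before the name touches the filesystem.

```python
import re

# Illustrative allowlist: lowercase alphanumerics and single hyphens,
# which by construction excludes path separators and `..` sequences.
# (Hypothetical helper; scripts/init_skill.py does not currently do this.)
_SKILL_NAME_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

def validate_skill_name(name: str) -> str:
    """Return the name if safe, otherwise raise ValueError."""
    if not _SKILL_NAME_RE.fullmatch(name):
        raise ValueError(f"invalid skill name: {name!r}")
    return name
```

Validating against an allowlist is safer than stripping `../` substrings, which can be bypassed (e.g., `....//` collapses back to `../` after one round of stripping).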
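For the `package_skill.py` and `quick_validate.py` findings, the suggested containment check can be sketched with `pathlib`: resolve the candidate path and confirm it remains under an allowed base directory. The helper name is hypothetical; `Path.is_relative_to` requires Python 3.9+.

```python
from pathlib import Path

def resolve_inside(base: Path, candidate: str) -> Path:
    """Resolve `candidate` relative to `base`; reject paths that escape it.

    Hypothetical sketch of the containment check recommended for the
    `skill_path` and `output_dir` arguments; not the scripts' actual code.
    """
    base = base.resolve()
    target = (base / candidate).resolve()
    if not target.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"path escapes allowed directory: {candidate!r}")
    return target
```

Resolving *before* the prefix check matters: `is_relative_to` on unresolved paths would accept `sandbox/../../etc/passwd`, since the traversal is only eliminated by `resolve()`.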
Full report: [skillshield.io/report/3cbfbb9f8aa4763f](https://skillshield.io/report/3cbfbb9f8aa4763f)