Trust Assessment
skill-creator received a trust score of 50/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 3 high, 0 medium, and 0 low severity. Key findings include Prompt Injection via Unsanitized Skill Name in Generated SKILL.md, Unsafe deserialization / dynamic eval, and Data Exfiltration via Packaging All Skill Files.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection via Unsanitized Skill Name in Generated SKILL.md** The `scripts/init_skill.py` script embeds the user-provided `<skill-name>` directly into the `SKILL_TEMPLATE` and `EXAMPLE_SCRIPT` strings without sanitization. If a malicious skill name containing LLM instructions (e.g., `my-skill-{{LLM_INJECTION_PAYLOAD}}`) is provided, it is written into the generated `SKILL.md` and `scripts/example.py` files. When an LLM later processes these files, the embedded payload could manipulate the LLM's behavior, resulting in prompt injection. Remediation: sanitize the `skill_name` input before embedding it in any template string; implement strict input validation that disallows characters interpretable as LLM instructions or code; and consider a templating engine that automatically escapes variables, or explicitly escape special characters such as curly braces, backticks, or specific keywords that might trigger LLM behavior. | LLM | scripts/init_skill.py:19 |
| HIGH | **Unsafe deserialization / dynamic eval** Decryption followed by code execution. Remediation: remove obfuscated code-execution patterns; legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | skills/chindden/skill-creator/scripts/init_skill.py:67 |
| HIGH | **Data Exfiltration via Packaging All Skill Files** The `scripts/package_skill.py` script includes all files in the specified skill directory in the distributable `.skill` (zip) archive. While intended, this behavior creates a high risk of data exfiltration: if a skill developer accidentally or intentionally places sensitive files (e.g., API keys, `.env` files, private data, credentials) in the skill directory, those files are packaged and shared with the user when the `.skill` file is distributed. The script has no mechanism to filter or exclude sensitive data. Remediation: implement a robust filtering mechanism in `package_skill.py`, such as (1) a whitelist of allowed file types/extensions, (2) a blacklist of common sensitive file names (e.g., `.env`, `*.key`, `id_rsa`), (3) support for a `.skillignore` file so developers can explicitly exclude files, and (4) clear warnings to users about the types of data that should never be included in a skill package. | LLM | scripts/package_skill.py:60 |
| HIGH | **Arbitrary File Write via Unrestricted Path Arguments** Both `scripts/init_skill.py` and `scripts/package_skill.py` accept user-controlled path arguments (`--path` for `init_skill.py`, `output-directory` for `package_skill.py`) that are used to create directories and write files (e.g., the new skill directory, the packaged `.skill` file). Although `Path.resolve()` is used, it does not prevent writing to arbitrary valid but unintended locations such as `/etc`, `/root`, or `/var/www`, which could overwrite critical system files, plant malicious executables, or consume disk space in sensitive areas. Remediation: restrict file creation and packaging to a designated, sandboxed, non-sensitive area (e.g., a temporary directory within the skill's workspace); strictly validate path inputs to block directory traversal (`..`) and absolute paths outside the allowed root; and consider running these scripts in a sandboxed environment with restricted filesystem permissions. | LLM | scripts/init_skill.py:15 |
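The sanitization recommended for the prompt-injection finding can be sketched as a strict allow-list check. This is a minimal illustration, not code from `init_skill.py`; the function name and the exact character rules are assumptions:

```python
import re

def sanitize_skill_name(name: str) -> str:
    """Accept only lowercase letters, digits, and hyphens (max 64 chars).

    Rejecting everything else means template payloads such as '{{...}}',
    backticks, or embedded instructions can never reach the generated
    SKILL.md or example scripts.
    """
    if not re.fullmatch(r"[a-z0-9][a-z0-9-]{0,63}", name):
        raise ValueError(f"invalid skill name: {name!r}")
    return name
```

An allow-list is preferable to escaping here: a skill name has no legitimate need for special characters, so rejecting them outright is simpler and harder to bypass than trying to enumerate dangerous sequences.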
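The blacklist portion of the packaging remediation might look like the sketch below; the deny-list contents and helper names are assumptions, and a real implementation would also honor a `.skillignore` file:

```python
import fnmatch
from pathlib import Path

# Hypothetical deny-list of common sensitive file patterns.
DENY_PATTERNS = [".env", "*.key", "*.pem", "id_rsa", "id_rsa.pub"]

def is_excluded(rel_path: str) -> bool:
    """Match the relative path and its basename against the deny-list."""
    name = Path(rel_path).name
    return any(
        fnmatch.fnmatch(rel_path, pat) or fnmatch.fnmatch(name, pat)
        for pat in DENY_PATTERNS
    )

def files_to_package(skill_dir: Path) -> list[Path]:
    """Collect every regular file under skill_dir except excluded ones."""
    return [
        p for p in sorted(skill_dir.rglob("*"))
        if p.is_file() and not is_excluded(str(p.relative_to(skill_dir)))
    ]
```

Matching the basename as well as the relative path ensures a nested `secrets/.env` is caught, not just one at the skill root.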
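The path-restriction advice in the arbitrary-file-write finding amounts to resolving the user-supplied path against an allowed root and rejecting anything that escapes it. `confine` is a hypothetical helper, not part of either script:

```python
from pathlib import Path

def confine(path_arg: str, allowed_root: Path) -> Path:
    """Resolve path_arg under allowed_root and refuse escapes.

    Handles both traversal ('../..') and absolute paths: joining an
    absolute path_arg replaces the root, so the is_relative_to check
    (Python 3.9+) still rejects it.
    """
    root = allowed_root.resolve()
    target = (root / path_arg).resolve()
    if not target.is_relative_to(root):
        raise ValueError(f"path escapes allowed root: {path_arg}")
    return target
```

The check must run on the fully resolved path, after symlinks and `..` segments are collapsed; comparing raw strings before resolution is a classic bypass.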