Security Audit
lawvable/awesome-legal-skills:skills/skill-creator-anthropic
github.com/lawvable/awesome-legal-skills

Trust Assessment
lawvable/awesome-legal-skills:skills/skill-creator-anthropic received a trust score of 52/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 0 critical, 3 high, 1 medium, and 0 low severity. Key findings include Unsafe Deserialization / Dynamic Eval, Command Injection via an unsanitized skill name in a generated script, and Arbitrary File Write via an unsanitized path argument in init_skill.py.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 26, 2026 (commit 4d82d4cf). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unsafe deserialization / dynamic eval.** Decryption followed by code execution. *Remediation:* Remove obfuscated code-execution patterns. Legitimate code does not need base64-encoded payloads executed via `eval`, encrypted-then-executed blobs, or dynamic attribute resolution to call system functions. | Manifest | `skills/skill-creator-anthropic/scripts/init_skill.py:67` |
| HIGH | **Command injection via unsanitized skill name in generated script.** The `init_skill.py` script embeds the `skill_name` argument directly into the `EXAMPLE_SCRIPT` template without sanitization. A malicious `skill_name` containing Python code (e.g., `my_skill"); import os; os.system("evil_command") #`) could lead to arbitrary code execution when the generated `example_script.py` is later run by an agent or system. Although `quick_validate.py` exists, `init_skill.py` does not call it, leaving this vector open when `init_skill.py` is run directly on untrusted input. *Remediation:* Sanitize the `skill_name` argument with the validation logic from `quick_validate.py` (e.g., `re.match(r'^[a-z0-9-]+$', name)`) before embedding it in executable script templates, or ensure `init_skill.py` is only ever run with trusted inputs and that any generated scripts are validated before execution. | LLM | `scripts/init_skill.py:60` |
| HIGH | **Arbitrary file write via unsanitized `path` argument in `init_skill.py`.** The script uses the `path` argument directly as the base directory for creating a new skill. A malicious `path` (e.g., `/etc/` or `../../../../evil_dir`) would let the script create directories and write skill template files to arbitrary filesystem locations, potentially overwriting critical system files or placing malicious content in unexpected places. *Remediation:* Validate that `path` is within an allowed, restricted base directory, or that it is a relative path that cannot escape the intended skill-creation scope; sanitize it to prevent directory traversal. | LLM | `scripts/init_skill.py:30` |
| MEDIUM | **Arbitrary file write via unsanitized `output_dir` argument in `package_skill.py`.** The script uses the `output_dir` argument directly to determine where the final `.skill` file is written. A malicious `output_dir` (e.g., `/etc/` or `../../../../evil_dir`) would place the packaged file at an arbitrary filesystem location; although the content written is a `.skill` archive, placing it in sensitive locations could still pose a risk. *Remediation:* Validate that `output_dir` is within an allowed, restricted base directory, or that it is a relative path that cannot escape the intended output scope; sanitize it to prevent directory traversal. | LLM | `scripts/package_skill.py:50` |
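The command-injection remediation suggests reusing the `^[a-z0-9-]+$` pattern attributed to `quick_validate.py`. A minimal sketch of that check, assuming the pattern from the finding (the `validate_skill_name` helper name is hypothetical, not part of the audited scripts):

```python
import re

# Pattern cited in the finding: lowercase letters, digits, and hyphens only.
NAME_RE = re.compile(r"^[a-z0-9-]+$")

def validate_skill_name(name: str) -> str:
    """Reject names that could break out of a generated script template."""
    if not NAME_RE.fullmatch(name):
        raise ValueError(f"invalid skill name: {name!r}")
    return name
```

Because the pattern forbids quotes, semicolons, and whitespace, a payload like `my_skill"); import os; ...` is rejected before it ever reaches the `EXAMPLE_SCRIPT` template.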