Trust Assessment
adversarial-prompting received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include Potential Command Injection via Script Execution and Arbitrary File Write to User's Home Directory.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via Script Execution | LLM | SKILL.md |
| MEDIUM | Arbitrary File Write to User's Home Directory | LLM | scripts/export_analysis.py:30 |

HIGH: Potential Command Injection via Script Execution (Layer: LLM, Location: SKILL.md)

The skill instructs the LLM to "automatically export the complete output to a markdown file using `scripts/export_analysis.py`". If the LLM constructs a shell command to run this script, and the arguments it passes (especially `content` or `problem_summary`, both of which can be influenced by user input or LLM-generated text) are not properly quoted or escaped at the shell level, a malicious user can inject arbitrary shell commands. For example, if `problem_summary` is manipulated to include shell metacharacters such as `'; rm -rf /; #'` and the LLM builds `python scripts/export_analysis.py "..." "; rm -rf /; #"` without proper quoting, the injected command executes. The Python script does sanitize `problem_summary` when building the output filename, but that sanitization runs *after* the shell has already parsed the command, so it cannot prevent shell-level injection.

Remediation: instruct the LLM to call the `export_analysis` function directly as a tool/function call, passing arguments as structured data rather than constructing a shell command. If shell execution is unavoidable, rigorously shell-escape every argument before execution, as in the sketch below.
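As a concrete illustration of the recommended fix, here is a minimal Python sketch. It assumes `export_analysis.py` accepts `problem_summary` and `content` as positional command-line arguments; the script's actual interface is not shown in this report, so the invocation below is illustrative only.

```python
import shlex
import subprocess

def run_export_safely(problem_summary: str, content: str) -> None:
    """Invoke the export script without exposing arguments to shell parsing.

    Assumes export_analysis.py takes problem_summary and content as
    positional arguments (an assumption; the real CLI is not shown here).
    """
    # Passing a list with shell=False (the default) hands each argument to
    # the interpreter directly. The shell never parses them, so metacharacters
    # like '; rm -rf /; #' remain inert string data.
    subprocess.run(
        ["python", "scripts/export_analysis.py", problem_summary, content],
        check=True,
    )

def build_shell_command(problem_summary: str, content: str) -> str:
    """If a single shell string is truly unavoidable, quote every argument."""
    # shlex.quote wraps each value so the shell treats it as one literal
    # token, neutralizing quotes, semicolons, and other metacharacters.
    return "python scripts/export_analysis.py {} {}".format(
        shlex.quote(problem_summary), shlex.quote(content)
    )
```

The list form is preferable because quoting mistakes become impossible by construction; the `shlex.quote` path exists only for environments where a single command string cannot be avoided.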
MEDIUM: Arbitrary File Write to User's Home Directory (Layer: LLM, Location: scripts/export_analysis.py:30)

The `scripts/export_analysis.py` script, which the skill instructs the LLM to use, writes arbitrary `content` to a file in the user's home directory (`Path.home()`). While `problem_summary` is sanitized when building the filename, the `content` itself is written directly without further validation. A malicious prompt could instruct the LLM to include sensitive information (e.g., system files, environment variables, or other confidential data accessible to the LLM) in the "complete output" that is then written to this file. This creates a vector for data exfiltration if the generated file is later accessed or transmitted, and for arbitrary file creation in a user-controlled location, potentially enabling further compromise. Remediation (a sketch of the first two steps follows this list):

1. Restrict the write location: limit file writing to a dedicated, isolated, temporary directory rather than the user's home directory.
2. Validate the content: strictly validate or sanitize `content` before writing it to a file, especially when it can be influenced by untrusted input.
3. Require user consent or review: for sensitive operations such as writing to the filesystem, require explicit user confirmation or review of the content before the write occurs.
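The first two remediations can be sketched in Python. This is a hypothetical hardened replacement for the write step at `scripts/export_analysis.py:30` (the script's actual code is not reproduced in this report); the export directory, size cap, and secret markers are illustrative choices, not part of the original script.

```python
import re
import tempfile
from pathlib import Path

# Hypothetical hardened write step; the directory, size cap, and secret
# markers below are illustrative, not taken from export_analysis.py.
EXPORT_DIR = Path(tempfile.gettempdir()) / "skill_exports"

def safe_export(problem_summary: str, content: str, max_bytes: int = 64_000) -> Path:
    """Write analysis output to an isolated directory with basic content checks."""
    EXPORT_DIR.mkdir(parents=True, exist_ok=True)

    # Sanitize the filename, then resolve the final path and confirm it
    # stays inside EXPORT_DIR (defeats path traversal via '../').
    stem = re.sub(r"[^A-Za-z0-9_-]+", "_", problem_summary)[:60] or "analysis"
    target = (EXPORT_DIR / f"{stem}.md").resolve()
    if EXPORT_DIR.resolve() not in target.parents:
        raise ValueError("export path escapes the export directory")

    # Crude content validation: cap size and reject obvious secret markers.
    # A real deployment would pair this with remediation 3, an explicit
    # user-review step, rather than relying on pattern matching alone.
    if len(content.encode("utf-8")) > max_bytes:
        raise ValueError("export content exceeds size limit")
    for marker in ("PRIVATE KEY", "AWS_SECRET_ACCESS_KEY"):
        if marker in content:
            raise ValueError("export content appears to contain secrets")

    target.write_text(content, encoding="utf-8")
    return target
```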