Trust Assessment
scam-guards received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is a potential command injection via unsanitized user input in script execution.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential Command Injection via unsanitized user input in script execution | LLM | SKILL.md:14 |

The skill's documentation explicitly defines several execution patterns in which user-provided input (e.g., skill name, URL, wallet address, text content, event data) is appended directly to a shell command. If the environment executing these commands does not sanitize or escape the inputs before passing them to the shell, a malicious user could inject arbitrary shell commands. The pattern is repeated across multiple script invocations. The system responsible for executing these commands must implement robust input sanitization and escaping for all user-provided arguments. In Python, prefer passing arguments to `subprocess` as a list (e.g., `subprocess.run(['python3', script_path, user_input])`) and avoid `shell=True` to prevent shell injection. For other languages, use equivalent safe execution methods.
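The recommended mitigation can be sketched as follows. This is a minimal illustration, not the skill's actual code: `run_skill_script` and the stand-in echo script are hypothetical names invented for the demo.

```python
import os
import subprocess
import sys
import tempfile

def run_skill_script(script_path: str, user_input: str) -> str:
    """Invoke a script with user input passed as a discrete argv entry.

    Because the command is a list and shell=True is not used, no shell
    ever parses user_input, so metacharacters like ';' stay inert.
    """
    result = subprocess.run(
        [sys.executable, script_path, user_input],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Demo: a stand-in "skill script" that simply echoes its first argument.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("import sys; print(sys.argv[1])")
    script = f.name

malicious = "alice; echo INJECTED"   # contains shell metacharacters
out = run_skill_script(script, malicious)
assert out.strip() == malicious      # delivered verbatim, never shell-parsed
os.unlink(script)
```

Had the same input been interpolated into a string and run with `shell=True`, the text after `;` would have executed as a separate command; the list form removes that interpretation step entirely.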
Powered by SkillShield