Trust Assessment
anthropic-frontend-design received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is a potential command injection via user-controlled input in a shell command.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Potential command injection via user-controlled input in a shell command. The skill explicitly instructs the use of `python scripts/search.py` with user-provided input (e.g., `<product_type> <industry> <keywords>`). If the LLM interpolates this input directly into a shell command without sanitization or escaping, an attacker could execute arbitrary commands on the host system. Remediation: validate and sanitize all user-provided arguments passed to shell commands; escape or quote user input so it cannot be interpreted as shell syntax; and prefer an execution method that avoids direct shell interpolation of user input, or sandbox the execution environment. | LLM | SKILL.md:33 |
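The remediation above can be sketched in Python. This is an illustrative example, not the skill's actual code: the function names (`build_search_argv`, `run_search`, `render_for_log`) are hypothetical, while the `python scripts/search.py` invocation and its three parameters come from the finding itself. The key idea is to pass user input as discrete argv entries with `shell=False`, so no shell ever parses it.

```python
import shlex
import subprocess

def build_search_argv(product_type: str, industry: str, keywords: str) -> list[str]:
    """Build the argument vector for the skill's search script.

    UNSAFE alternative (vulnerable if keywords == "x; rm -rf /"):
        subprocess.run(f"python scripts/search.py {product_type} {industry} {keywords}",
                       shell=True)
    """
    # Each user-supplied value becomes one argv element; with shell=False the
    # child process receives it verbatim, and ';', '|', '$()' etc. stay literal.
    return ["python", "scripts/search.py", product_type, industry, keywords]

def run_search(product_type: str, industry: str, keywords: str) -> str:
    argv = build_search_argv(product_type, industry, keywords)
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.stdout

def render_for_log(argv: list[str]) -> str:
    # If a shell-style string is needed anyway (e.g., for logging), quote each
    # element so the logged command is safe to copy-paste into a shell.
    return " ".join(shlex.quote(a) for a in argv)
```

Even with list-form execution, validating arguments against an allowlist (expected product types, length limits) remains worthwhile, since the downstream script may itself treat its arguments unsafely.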
Scan History
Full report: https://skillshield.io/report/f471707ac45663cc