Trust Assessment
lead-gen-website received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include Code Injection in Generated Web Pages, Content Injection in robots.txt and sitemap.xml, and Content Injection in Generated Markdown Structure.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Code Injection in Generated Web Pages.** The `generate_pages_batch.py` script performs direct string replacement of user-provided data (from `data_file`) into `.tsx` template files, allowing an attacker to inject arbitrary code (e.g., JavaScript for cross-site scripting) into the generated pages. When these pages are compiled and served, the injected code executes in the user's browser, a severe client-side vulnerability. Remediation: sanitize and escape all user-provided data before inserting it into templates. For `.tsx` files, HTML-escape content intended for display, and strictly validate content intended as code (e.g., component names, variable names) or avoid inserting it from untrusted sources at all. Consider a secure templating engine that escapes output by default. | LLM | scripts/generate_pages_batch.py:24 |
| HIGH | **Content Injection in robots.txt and sitemap.xml.** The `create_seo_files.py` script embeds user-provided `domain` and `url` values into `robots.txt` and `sitemap.xml` without sanitization. A malicious value could lead to cross-site scripting (XSS) if these files are viewed in a browser without proper content-type headers, or could inject malformed directives that harm search engine crawling and indexing. Remediation: validate `domain` and `url` inputs against valid URL structures, reject values containing unexpected characters, and apply XML escaping to all dynamic content in the sitemap. | LLM | scripts/create_seo_files.py:10 |
| MEDIUM | **Content Injection in Generated Markdown Structure.** The `generate_content_structure.py` script inserts user-provided data (e.g., `site_name`, `page.name`, `page.title`, `section.content`) from a JSON specification into a Markdown file without sanitization. Because Markdown renderers commonly interpret embedded HTML, an attacker could inject malicious HTML or JavaScript, leading to XSS. Remediation: escape HTML and Markdown special characters in all user-provided data before writing it to the Markdown file. | LLM | scripts/generate_content_structure.py:17 |
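The remediation for the critical finding can be sketched in Python. This is a minimal illustration, not the skill's actual code: the `{{key}}` placeholder syntax and the `fill_template_safely` name are assumptions, and a real fix for `.tsx` templates would additionally need strict validation for values inserted into code positions (component names, identifiers) rather than display positions.

```python
import html

def fill_template_safely(template: str, values: dict) -> str:
    """Substitute {{key}} placeholders with HTML-escaped values.

    Escaping happens before the string replacement, so any markup
    in user data renders as inert text instead of executing.
    """
    out = template
    for key, raw in values.items():
        out = out.replace("{{" + key + "}}", html.escape(str(raw), quote=True))
    return out

page = fill_template_safely(
    "<h1>{{title}}</h1>",
    {"title": '<script>alert("xss")</script>'},
)
print(page)  # the injected <script> tag survives only as &lt;script&gt;... text
```

A templating engine with auto-escaping (e.g., Jinja2 with `autoescape=True`) achieves the same effect without hand-rolled replacement.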
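For the high-severity finding, input validation plus XML escaping covers both failure modes the report describes. The sketch below is a hedged example: the helper names and the hostname regex are illustrative assumptions, not `create_seo_files.py`'s interface.

```python
import re
from urllib.parse import urlparse
from xml.sax.saxutils import escape

# Conservative hostname pattern: dot-separated labels of letters,
# digits, and interior hyphens; no scheme, path, or markup characters.
DOMAIN_RE = re.compile(
    r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)(\.(?!-)[A-Za-z0-9-]{1,63}(?<!-))+$"
)

def validate_domain(domain: str) -> str:
    """Reject anything that is not a plain hostname."""
    if not DOMAIN_RE.fullmatch(domain):
        raise ValueError(f"invalid domain: {domain!r}")
    return domain

def sitemap_entry(url: str) -> str:
    """Build one <url> element, XML-escaping the location."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"invalid URL: {url!r}")
    return f"  <url><loc>{escape(url)}</loc></url>"

print(validate_domain("example.com"))
print(sitemap_entry("https://example.com/page?a=1&b=2"))
```

Rejecting malformed input outright (rather than silently escaping it) also prevents the broken-directive problem in `robots.txt`, which has no escaping mechanism of its own.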
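The medium-severity finding calls for escaping both HTML and Markdown syntax before writing user data into the generated Markdown. A minimal sketch, assuming a generic CommonMark-style renderer; the exact set of characters worth backslash-escaping depends on the renderer actually used downstream.

```python
import html

# Characters that commonly carry Markdown meaning; an illustrative
# set, to be tuned for the target renderer.
_MD_SPECIALS = set(r"\`*_{}[]()#+!|>")

def escape_for_markdown(text: str) -> str:
    """HTML-escape first, then backslash-escape Markdown syntax."""
    escaped = html.escape(text, quote=True)
    return "".join("\\" + ch if ch in _MD_SPECIALS else ch for ch in escaped)

# Injected HTML in a section title becomes harmless literal text.
print("# " + escape_for_markdown('<img src=x onerror="alert(1)">'))
```

Applied to every JSON-sourced field (`site_name`, `page.title`, `section.content`, and so on), this keeps attacker-controlled strings from being interpreted as either HTML or Markdown structure when the file is rendered.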
Embed Code
[](https://skillshield.io/report/5a4a319b5b0ac6c3)