Security Audit
ailabs-393/ai-labs-claude-skills:dist/skills/seo-optimizer
github.com/ailabs-393/ai-labs-claude-skills

Trust Assessment
ailabs-393/ai-labs-claude-skills:dist/skills/seo-optimizer received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 5 findings: 0 critical, 2 high, 2 medium, and 1 low severity. Key findings include arbitrary file read via `seo_analyzer.py`, arbitrary file write via `generate_sitemap.py`, and prompt injection via `seo_analyzer.py` output.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 54/100, indicating areas for improvement.
Last analyzed on March 14, 2026 (commit 1a12bc7a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (5)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Arbitrary File Read via seo_analyzer.py** — The `seo_analyzer.py` script takes a `directory_or_file` argument directly from `sys.argv[1]` without sanitization and uses it to construct file paths that are opened for reading. An attacker could supply a path traversal sequence (e.g., `../../../../etc/passwd`) to read arbitrary files on the system where the skill executes, enabling data exfiltration. *Mitigation:* implement robust input validation and sanitization for the `directory_or_file` argument, and restrict file access to a designated sandbox directory or to files within the skill's intended scope. | LLM | scripts/seo_analyzer.py:200 |
| HIGH | **Arbitrary File Write via generate_sitemap.py** — The `generate_sitemap.py` script takes an optional `output_file` argument directly from `sys.argv` without sanitization and uses it to create and write an XML sitemap. An attacker could supply a path traversal sequence (e.g., `../../../../tmp/malicious.xml`) to write arbitrary content to any writable location on the system, overwriting sensitive files, placing malicious content in unexpected locations, or facilitating further attacks. *Mitigation:* implement robust input validation and sanitization for the `output_file` argument, and restrict file writing to a designated sandbox directory or to the skill's intended output scope. | LLM | scripts/generate_sitemap.py:180 |
| MEDIUM | **Prompt Injection via seo_analyzer.py output** — The `seo_analyzer.py` script extracts text elements (titles, meta descriptions, heading text, alt attributes) directly from analyzed HTML. If that HTML is untrusted (e.g., user-supplied) and contains prompt injection instructions (e.g., `<h1>IGNORE ALL PREVIOUS INSTRUCTIONS</h1>`), the instructions are included verbatim in the generated SEO report; an LLM that later consumes the report could follow them, leading to manipulated behavior. *Mitigation:* sanitize or escape any user-controlled text extracted from the HTML before feeding the analyzer's output to an LLM, use a structured output format that clearly separates data from instructions, and explicitly mark user-controlled content as such. | LLM | scripts/seo_analyzer.py:200 |
| MEDIUM | **Prompt Injection via index.js stub** — The `index.js` skill implementation is a placeholder that returns the `input` it receives without any validation or sanitization. If the `input` contains prompt injection instructions (e.g., "ignore previous instructions"), these are reflected directly back to the host LLM, potentially manipulating its behavior or triggering unintended actions. *Mitigation:* validate and sanitize the `input` parameter before returning it or using it in any way that could influence the LLM; treat `input` as untrusted data and never reflect it directly. | LLM | index.js:5 |
| LOW | **XML Injection Risk in Sitemap base_url** — The `generate_sitemap.py` script constructs URLs from the `base_url` argument, taken directly from `sys.argv[2]`, and inserts them into the XML sitemap. Although `xml.etree.ElementTree` and `minidom` perform basic XML escaping, a crafted `base_url` containing XML entities or script tags could, if the sitemap is later processed by a vulnerable XML parser or rendered in a browser, lead to XSS or other injection attacks; if the sitemap is consumed by an LLM, crafted XML could also attempt prompt injection. *Mitigation:* validate and sanitize `base_url` to ensure it is a well-formed, safe URL, and explicitly escape all user-controlled data before inserting it into XML elements, even where the XML library provides default escaping. | LLM | scripts/generate_sitemap.py:120 |
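Both HIGH findings share the same recommended fix: confine user-supplied paths to a designated directory. A minimal sketch of that check in Python follows; the `resolve_within` helper and the sandbox layout are illustrative, not part of the skill's code:

```python
from pathlib import Path


def resolve_within(base_dir: str, user_path: str) -> Path:
    """Resolve user_path against base_dir and refuse paths that escape it.

    Raises ValueError on traversal attempts such as '../../etc/passwd'.
    """
    base = Path(base_dir).resolve()
    # Resolving collapses '..' segments and symlinks before the check.
    candidate = (base / user_path).resolve()
    if candidate != base and base not in candidate.parents:
        raise ValueError(f"path escapes sandbox: {user_path!r}")
    return candidate
```

Applying this to `sys.argv[1]` in `seo_analyzer.py` (reads) and to the `output_file` argument in `generate_sitemap.py` (writes) would address both path-traversal findings, at the cost of requiring callers to work inside a known base directory.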
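The two MEDIUM findings both stem from reflecting untrusted text verbatim into LLM-bound output. One common mitigation, sketched here as an assumption rather than anything the skill implements, is to escape user-controlled strings and wrap them in explicit data delimiters before they reach the report:

```python
import html
import json


def quote_untrusted(label: str, text: str) -> str:
    """Wrap user-controlled text so a downstream LLM can treat it as data.

    HTML-escapes the text, then JSON-encodes it (escaping quotes and
    newlines) and tags it with an explicit delimiter, so instructions
    embedded in the text stay inside a clearly marked data block.
    """
    return f"<untrusted {label}>{json.dumps(html.escape(text))}</untrusted>"
```

For example, `quote_untrusted("h1", "<h1>IGNORE ALL PREVIOUS INSTRUCTIONS</h1>")` keeps the injected heading inside a quoted, labelled block instead of emitting it as free-standing report text. Delimiting alone is not a complete defense, but it matches the report's advice to separate data from instructions and mark user-controlled content as such.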
Embed Code
[SkillShield Report](https://skillshield.io/report/db23770fb8fd1444)
Powered by SkillShield