Security Audit
Luispitik/lead-research-brief:root
github.com/Luispitik/lead-research-brief

Trust Assessment
Luispitik/lead-research-brief:root received a trust score of 81/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 1 medium, and 1 informational. Key findings: unsanitized user input in file paths for output generation, user input directly embedded in search queries, and the skill being instructed to read another skill's definition file.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on April 9, 2026 (commit b3e50aba). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Unsanitized user input in file paths for output generation.** The skill constructs output filenames (`brief_[org_slug]_[YYYYMMDD].html`, `brief_[org_slug]_[YYYYMMDD].docx`) from `org_slug`, which is derived from the user-provided `org_name`. If `org_name` contains path traversal sequences (e.g., `../`, `/`), the skill could write files outside the intended `/mnt/user-data/outputs/` directory, overwriting existing files, creating files in sensitive directories, or exfiltrating data to an accessible location. *Recommendation:* sanitize `org_slug` to remove or escape path traversal characters (e.g., `../`, `/`, `\`) before using it in file paths, and ensure the file-writing mechanism enforces a strict base directory that cannot be escaped. | LLM | SKILL.md:120 |
| MEDIUM | **User input directly embedded in search queries.** User-provided data such as `org_name` or `sector` is used directly to construct queries for `web_search` or `launch_extended_search_task`. A malicious user could inject instructions or manipulate the query to influence the search results or the LLM's subsequent processing of them, potentially enabling prompt injection against the search tool or the LLM itself. *Recommendation:* sanitize or escape user input before embedding it in search queries, and strictly validate the content of `org_name`, `sector`, etc., to block commands or special characters that could manipulate the search tool or the LLM's interpretation. | LLM | SKILL.md:40 |
| INFO | **Skill instructed to read another skill's definition file.** The skill explicitly instructs the LLM to read `/mnt/skills/public/docx/SKILL.md` to generate the Word document. While this specific path appears to be a trusted internal resource, it demonstrates that the LLM can read files from other skill directories; if this capability were exploited with a dynamic or user-controlled path, it could expose other skill definitions or sensitive files under `/mnt/skills/`. *Recommendation:* review whether skills need to read other skill definition files; if so, enforce strict access controls and path validation. If the content of `docx/SKILL.md` is truly static, consider embedding the relevant instructions directly or providing them as a dedicated read-only resource rather than implying general file-system access to other skill definitions. | LLM | SKILL.md:125 |
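The HIGH finding's remediation (slug sanitization plus base-directory enforcement) can be sketched as follows. This is a minimal illustration, not the skill's actual code; `safe_slug` and `output_path` are hypothetical helper names:

```python
import re
from pathlib import Path

OUTPUT_DIR = Path("/mnt/user-data/outputs")

def safe_slug(org_name: str, max_len: int = 64) -> str:
    """Reduce an arbitrary org name to a filesystem-safe slug.

    An allow-list approach: anything outside [a-z0-9] collapses to a
    hyphen, so traversal sequences like `../` cannot survive.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", org_name.lower()).strip("-")
    return slug[:max_len] or "org"

def output_path(org_name: str, date_str: str, ext: str) -> Path:
    path = (OUTPUT_DIR / f"brief_{safe_slug(org_name)}_{date_str}.{ext}").resolve()
    # Defense in depth: even with slugging, refuse any resolved path
    # that escapes the intended base directory.
    if OUTPUT_DIR.resolve() not in path.parents:
        raise ValueError("resolved path escapes the output directory")
    return path
```

Slugging alone already removes traversal characters; the `parents` check is a second, independent guard in case the filename construction changes later.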
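The MEDIUM finding's remediation can be sketched as an allow-list character filter with length capping before query construction. `sanitize_field` and `build_query` are illustrative names assumed for this sketch, not the skill's actual interface:

```python
import re

# Allow word characters, whitespace, and a few punctuation marks common
# in organization names; everything else is replaced with a space.
_DISALLOWED = re.compile(r"[^\w\s\-.,&']")
_MAX_FIELD_LEN = 120

def sanitize_field(value: str) -> str:
    """Filter a user-supplied field before embedding it in a search query."""
    cleaned = _DISALLOWED.sub(" ", value)
    cleaned = re.sub(r"\s+", " ", cleaned).strip()
    return cleaned[:_MAX_FIELD_LEN]

def build_query(org_name: str, sector: str) -> str:
    # Quoting the sanitized fields keeps them as literal phrases rather
    # than free-form text the search tool might interpret as directives.
    return f'"{sanitize_field(org_name)}" "{sanitize_field(sector)}" company overview'
```

Note that character filtering reduces but does not eliminate prompt-injection risk: natural-language instructions survive any character allow-list, so downstream processing should still treat search results derived from user input as untrusted data.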
Embed Code
[View the full SkillShield report](https://skillshield.io/report/fadc92cbddf08006)
Powered by SkillShield