Security Audit
lead-research-assistant
github.com/ComposioHQ/awesome-claude-skills

Trust Assessment
lead-research-assistant received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified one finding: one high severity, with no critical, medium, or low severity issues. The key finding is that the skill requests broad read access to the local codebase.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit 27904475). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Skill requests broad read access to local codebase | LLM | SKILL.md:48 |

The skill explicitly instructs the host LLM to "analyze the codebase" and "look at what I'm building in this repository" to understand the user's product. This grants the LLM broad read access to the entire local repository, an extensive permission. Codebases frequently contain sensitive information (e.g., API keys, internal logic, proprietary algorithms, PII in test data), so this access poses a significant data exfiltration risk if the LLM's environment or downstream tools are compromised or mishandle the accessed data.

Recommended mitigations:

1. **User Consent & Warning:** Clearly inform the user of the scope of local file access and the potential risks before execution.
2. **Scope Limitation:** Where possible, limit the LLM's access to specific directories or file types within the repository rather than the entire codebase.
3. **Sandboxing:** Strictly sandbox the LLM's execution environment to prevent unauthorized storage, transmission, or execution of code based on the accessed data.
4. **Data Handling Policy:** Establish and enforce a clear policy on how the LLM processes and stores sensitive information from local files, ensuring it is not retained or exposed.
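The scope-limitation mitigation above can be sketched as a deny-by-default path check run by the host before any file content is handed to the model. This is a minimal illustration under stated assumptions, not part of the audited skill; `ALLOWED_DIRS` and `is_read_allowed` are hypothetical names.

```python
from pathlib import Path

# Hypothetical allowlist: only these repository subdirectories
# may be read by the skill (the "Scope Limitation" mitigation).
ALLOWED_DIRS = ("docs", "src/public")

def is_read_allowed(repo_root: str, requested: str) -> bool:
    """Return True only if `requested` resolves inside an allowed directory.

    Resolving both paths defeats `../` traversal before the comparison.
    """
    root = Path(repo_root).resolve()
    target = (root / requested).resolve()
    for allowed in ALLOWED_DIRS:
        base = (root / allowed).resolve()
        if target == base or base in target.parents:
            return True
    return False
```

A host implementing this check would call it before every read, refusing (or prompting the user, per the consent mitigation) whenever it returns `False`.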
Full report: https://skillshield.io/report/959dcb6231f15813
Powered by SkillShield