Security Audit
lead-research-assistant
github.com/skillcreatorai/Ai-Agent-Skills

Trust Assessment
lead-research-assistant received a trust score of 86/100, placing it in the Mostly Trusted category: it passed most security checks, with one high-severity finding noted.
SkillShield's automated analysis identified 1 finding (0 critical, 1 high, 0 medium, 0 low). The key finding: the skill requests broad filesystem read access to the 'codebase'.
The analysis covered 4 layers: dependency_graph, manifest_analysis, llm_behavioral_safety, static_code_analysis. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 11, 2026 (commit 6195a031). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Skill requests broad filesystem read access to 'codebase' | Unknown | SKILL.md:59 |

Description: The skill explicitly instructs the AI agent to 'analyze the codebase to understand the product' when run in a code directory, which implies broad read access to the current working directory and its subdirectories. Without sandboxing or explicit file-access controls, the agent could read sensitive files (e.g., configuration files, .env files, private keys, proprietary source code) and expose them, leading to data exfiltration.

Recommendation: Implement strict sandboxing for filesystem access, limiting the agent to specific, non-sensitive files or directories, and require explicit user confirmation for file reads. Avoid instructing agents to read entire codebases without a clear scope and security boundaries; if codebase analysis is necessary, ensure that only non-sensitive, pre-approved files are accessible. A minimal sketch of such a guard appears below.
[Full report](https://skillshield.io/report/e063b6e54a64992e)