Security Audit
lawvable/awesome-legal-skills:skills/meeting-briefing-anthropic
github.com/lawvable/awesome-legal-skills

Trust Assessment
lawvable/awesome-legal-skills:skills/meeting-briefing-anthropic received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding is broad access to sensitive data sources.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 26, 2026 (commit 4d82d4cf). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Broad access to sensitive data sources | LLM | SKILL.md:47 |

The skill's core functionality, as described, requires extensive access to highly sensitive and confidential data across multiple enterprise systems. It instructs the agent to "Pull relevant information from each connected source", including Calendar, Email, Chat, Documents (e.g., Box, Egnyte, SharePoint), Contract Lifecycle Management (CLM), and Customer Relationship Management (CRM). This implies broad read access to potentially all data within these systems. While necessary for the skill's stated purpose of preparing legal briefings, this wide scope of data access significantly increases the attack surface: a successful prompt injection or other compromise could lead to the unauthorized extraction or misuse of sensitive legal, financial, and personal information that the agent can reach.

Recommendation: Implement granular access controls for each connected data source, ensuring the agent can only access the minimum necessary information for a specific task. Clearly define and enforce the scope of "relevant information" to prevent over-collection. Consider data redaction or anonymization where possible before data is presented to the LLM, and ensure robust prompt injection defenses are in place to prevent misuse of these broad permissions.
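The recommended mitigations (per-source allowlists plus redaction before data reaches the LLM) can be sketched as follows. This is a minimal illustration, not the skill's actual implementation: the source names, field allowlists, and the `fetch_scoped` helper are all hypothetical, and real deployments would enforce scopes at the connector or API-token level rather than in application code alone.

```python
import re

# Hypothetical per-source field allowlists: the agent may read only
# these fields from each connected system (illustrative names, not
# the skill's real configuration).
SOURCE_SCOPES = {
    "calendar": {"title", "start", "attendees"},
    "crm": {"account_name", "deal_stage"},
}

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> str:
    """Mask email addresses before the text is shown to the LLM."""
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)

def fetch_scoped(source: str, record: dict) -> dict:
    """Return only allowlisted fields from a record, with redaction
    applied, dropping everything outside the declared scope."""
    allowed = SOURCE_SCOPES.get(source, set())
    return {k: redact(str(v)) for k, v in record.items() if k in allowed}

record = {
    "title": "Board sync",
    "start": "2026-03-01T09:00",
    "attendees": "ceo@example.com",
    "notes": "privileged strategy discussion",
}
print(fetch_scoped("calendar", record))
```

The key design point is that fields outside the allowlist (here, `notes`) never reach the model at all, while allowlisted fields still pass through redaction, so a prompt-injected agent has strictly less data available to exfiltrate.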
Embed Code
[SkillShield Report](https://skillshield.io/report/c4623d0dbe7f057d)
Powered by SkillShield