Trust Assessment
stakeholder-docs received a trust score of 74/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings include Excessive Read/Glob Permissions and Potential Data Exfiltration via Broad Read/Glob and Summarization.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Excessive Read/Glob Permissions.** The skill declares broad `Read` and `Glob` permissions, which allow it to access and list files anywhere on the filesystem. While the skill's description states that it "reads from .specweave/docs/internal/", this is a descriptive statement of intended use, not an enforced restriction. A malicious prompt could instruct the underlying LLM to use these broad permissions to read or list sensitive files outside the intended scope (e.g., /etc/passwd, ~/.ssh, application configuration files). Recommendation: restrict file access to the minimum necessary. If the skill only needs to read from `.specweave/docs/internal/`, specify that path explicitly in the permissions (e.g., `Read:.specweave/docs/internal/**`) rather than a global `Read`, and implement robust input validation and sandboxing to prevent the LLM from accessing unintended file paths. | LLM | SKILL.md:1 |
| HIGH | **Potential Data Exfiltration via Broad Read/Glob and Summarization.** The skill's core function is to read and summarize technical documentation into business-friendly views. Combined with the broad `Read` and `Glob` permissions, this creates a credible data-exfiltration risk: a malicious prompt could instruct the LLM to read sensitive files (e.g., credentials, private keys, proprietary code) from arbitrary filesystem locations and include their contents in the generated "executive summary" or "feature status dashboard", effectively exfiltrating the data to whoever receives the output. Recommendation: enforce strict path-based access controls on `Read` and `Glob`, limiting them to only the directories and file types the skill requires; sanitize or filter the content that can appear in generated summaries to prevent accidental or intentional inclusion of sensitive data; and consider a dedicated, sandboxed environment for file operations. | LLM | SKILL.md:1 |
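The first finding's mitigation (scoping reads to `.specweave/docs/internal/`) can be enforced in code as well as in the permission declaration. Below is a minimal sketch of a path-allowlist check run before any file read; the `is_allowed` function and the allowlist root are illustrative assumptions, not part of the skill itself:

```python
from pathlib import Path

# Hypothetical allowlist root matching the skill's stated scope.
ALLOWED_ROOT = Path(".specweave/docs/internal").resolve()

def is_allowed(requested: str) -> bool:
    """Return True only if the resolved path stays inside the allowed root.

    resolve() collapses '..' segments and symlinks, so traversal attempts
    like '.specweave/docs/internal/../../../etc/passwd' are rejected.
    """
    resolved = Path(requested).resolve()
    return resolved == ALLOWED_ROOT or ALLOWED_ROOT in resolved.parents

print(is_allowed(".specweave/docs/internal/overview.md"))                 # True
print(is_allowed(".specweave/docs/internal/../../../etc/passwd"))         # False
```

The key design point is validating the *resolved* path rather than the raw string, since string-prefix checks are trivially bypassed with `..` segments.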
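The second finding's output-sanitization advice can be sketched as a redaction pass over summary text before it is returned. The patterns below are illustrative assumptions (a production filter would use a vetted secret-detection library and a broader pattern set), not part of the skill:

```python
import re

# Hypothetical patterns for common secret shapes; deliberately non-exhaustive.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key ID shape
    re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+"),
]

def redact(text: str) -> str:
    """Replace anything matching a secret pattern before it reaches a summary."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(redact("password = hunter2"))  # [REDACTED]
```

Redaction is a second line of defense, not a substitute for the path restrictions above: it limits what leaks if an out-of-scope file is read despite the access controls.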
Scan History
Embed Code
[SkillShield report for stakeholder-docs](https://skillshield.io/report/681a3a94f9d8696b)
Powered by SkillShield