Security Audit
product-marketing-context
github.com/coreyhaines31/marketingskills
Trust Assessment
product-marketing-context received a trust score of 86/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified one finding: 0 critical, 1 high, 0 medium, and 0 low severity. The single high-severity finding is excessive permissions: broad codebase read access for drafting.
The analysis covered 4 layers: dependency_graph, manifest_analysis, llm_behavioral_safety, static_code_analysis. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 16, 2026 (commit a04cb61a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Excessive permissions: broad codebase read access for drafting | Unknown | SKILL.md:29 |

The skill instructs the host LLM to "Read the codebase: README, landing pages, marketing copy, about pages, meta descriptions, package.json, any existing docs" in order to auto-draft a product marketing context document. This grants excessively broad read permissions over the entire repository, including potentially sensitive files such as configuration files, internal documentation, `.env` files, or private source code. If the LLM incorporates sensitive information from these files into the generated `.claude/product-marketing-context.md` document, it could lead to data exfiltration. The phrase "any existing docs" is particularly dangerous because it leaves the scope of accessible files undefined.

Remediation: restrict the set of files the LLM is allowed to read. Instead of a blanket "codebase" or "any existing docs", specify a precise whitelist of non-sensitive file types and directories (e.g., only `.md` and `.txt` files under `docs/` or `public/`), and explicitly exclude sensitive directories, configuration files, and files known to contain secrets (e.g., `.env`, `config.*`, `secrets.*`). Implement robust sandboxing for file access to prevent unauthorized reads; a minimal sketch of this approach follows.
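To illustrate the whitelist-plus-exclusion approach the remediation describes, here is a minimal sketch in Python. The specific extensions, directories, and exclusion patterns are assumptions chosen for the example, not values prescribed by SkillShield or the audited skill.

```python
# Minimal sketch of whitelist-based file access for a drafting skill.
# ALLOWED_EXTENSIONS, ALLOWED_DIRS, and EXCLUDE_PATTERNS are illustrative
# assumptions, not values mandated by SkillShield or the skill itself.
from pathlib import Path
import fnmatch

ALLOWED_EXTENSIONS = {".md", ".txt"}                   # assumed non-sensitive types
ALLOWED_DIRS = {"docs", "public"}                      # assumed safe directories
EXCLUDE_PATTERNS = [".env*", "config.*", "secrets.*"]  # known secret-bearing names

def readable_files(repo_root: str):
    """Yield only the files the drafting step may read, enforcing the whitelist."""
    root = Path(repo_root).resolve()
    for directory in ALLOWED_DIRS:
        base = root / directory
        if not base.is_dir():
            continue
        for path in base.rglob("*"):
            resolved = path.resolve()
            # Sandboxing: refuse anything that escapes the repository root,
            # e.g. via symlinks pointing outside it. (is_relative_to needs
            # Python 3.9+.)
            if not resolved.is_relative_to(root):
                continue
            if not resolved.is_file():
                continue
            if resolved.suffix not in ALLOWED_EXTENSIONS:
                continue
            if any(fnmatch.fnmatch(resolved.name, pat) for pat in EXCLUDE_PATTERNS):
                continue
            yield resolved

# Usage: collect candidate source files for the marketing-context draft.
if __name__ == "__main__":
    for f in readable_files("."):
        print(f)
```

The design choice here is that a deny-list alone is brittle: the whitelist makes the default "no access", so newly added sensitive files stay excluded unless someone deliberately allows them.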