Security Audit
competitor-alternatives
github.com/coreyhaines31/marketingskills

Trust Assessment
competitor-alternatives received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include a Prompt Injection Attempt via Persona Redefinition and Potential Data Exfiltration via a Local File Read.
The analysis covered 4 layers: dependency_graph, manifest_analysis, llm_behavioral_safety, static_code_analysis. The llm_behavioral_safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 16, 2026 (commit a04cb61a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection Attempt via Persona Redefinition.** The untrusted skill content attempts to redefine the LLM's persona and goals ("You are an expert...", "Your goal is...") from within the untrusted input block. This is a direct prompt injection attempt aimed at manipulating the host LLM's behavior and instructions. *Remediation:* remove all instructions that attempt to define the LLM's persona, role, or goals from the untrusted content; these belong in the trusted system prompt. | Unknown | SKILL.md:4 |
| HIGH | **Potential Data Exfiltration via Local File Read.** The untrusted skill content explicitly instructs the LLM to read a local file (`.claude/product-marketing-context.md`). This poses a data exfiltration risk: if the LLM has file access capabilities, it could be prompted to read and expose sensitive information from the local filesystem. While the file path is specific, it demonstrates intent to access local data. *Remediation:* do not let the LLM read local files based on instructions from untrusted input; if file access is necessary, mediate it through a trusted tool with strict access controls and validation, not directly by the LLM. | Unknown | SKILL.md:10 |
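The persona-redefinition pattern flagged in the CRITICAL finding can be caught with even a simple static check. The sketch below is illustrative only, with hypothetical regex patterns chosen for this example; SkillShield's actual llm_behavioral_safety layer is a broader model-assisted analysis, not a regex list.

```python
import re

# Hypothetical patterns that often signal an attempt to redefine the host
# LLM's persona or goals from inside untrusted content.
PERSONA_REDEFINITION = [
    r"\byou are an? (expert|senior|world-class)\b",
    r"\byour goal is\b",
    r"\bignore (all )?previous instructions\b",
]

def flag_persona_redefinition(untrusted_text: str) -> list[str]:
    """Return the patterns that match, flagging likely injection attempts."""
    hits = []
    for pattern in PERSONA_REDEFINITION:
        if re.search(pattern, untrusted_text, flags=re.IGNORECASE):
            hits.append(pattern)
    return hits
```

A skill file whose body contains "You are an expert..." or "Your goal is..." would trip the first two patterns, matching the finding at SKILL.md:4.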
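The HIGH finding's remediation calls for mediating file access through a trusted tool rather than letting the LLM read files on untrusted instruction. A minimal sketch of such a mediator, assuming a hypothetical allowlist of readable roots and a size cap (both names and limits are illustrative, not part of any real API):

```python
from pathlib import Path

# Hypothetical policy: only files under these roots may be exposed to the model.
ALLOWED_ROOTS = [Path("docs").resolve()]
MAX_BYTES = 64_000

def mediated_read(requested: str) -> str:
    """Read a file on the model's behalf, enforcing an allowlist.

    Resolving the path first defeats `..` and symlink escapes, so a prompt
    injected into untrusted content cannot steer reads outside ALLOWED_ROOTS.
    """
    resolved = Path(requested).resolve()
    if not any(resolved.is_relative_to(root) for root in ALLOWED_ROOTS):
        raise PermissionError(f"refusing to read outside allowlist: {requested}")
    data = resolved.read_bytes()
    if len(data) > MAX_BYTES:
        raise ValueError(f"file too large to expose to the model: {requested}")
    return data.decode("utf-8", errors="replace")
```

The key design choice is that the policy lives in trusted code: the model can only request a path, never change what the mediator will serve.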
[View full report](https://skillshield.io/report/3f1b2a68a0733dd5)
Powered by SkillShield