Trust Assessment
referral-program received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings are Prompt Injection via Role-Setting and Instruction (critical) and Data Exfiltration / Excessive Permissions via Local File Read Instruction (high).
The analysis covered 4 layers: manifest_analysis, llm_behavioral_safety, static_code_analysis, and dependency_graph. The llm_behavioral_safety layer scored lowest at 55/100, indicating the most room for improvement.
Last analyzed on February 16, 2026 (commit a04cb61a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
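This report does not state how the per-layer scores roll up into the 65/100 overall score. Purely as an illustration of one plausible aggregation (a weighted average), the sketch below uses made-up weights and placeholder layer values; it is not SkillShield's actual formula, and `aggregate_trust_score` is a hypothetical helper name.

```python
# Hypothetical aggregation sketch: NOT SkillShield's documented formula.
# Layer names come from the report above; the weights are invented for
# illustration and chosen to sum to 1.0.
LAYER_WEIGHTS = {
    "manifest_analysis": 0.20,
    "llm_behavioral_safety": 0.35,
    "static_code_analysis": 0.30,
    "dependency_graph": 0.15,
}

def aggregate_trust_score(layer_scores: dict[str, float]) -> float:
    """Weighted average of per-layer scores on a 0-100 scale."""
    total = sum(LAYER_WEIGHTS[name] * layer_scores[name] for name in LAYER_WEIGHTS)
    return round(total, 1)

# Only the llm_behavioral_safety score (55) is stated in the report;
# the other three values are placeholders.
example = {
    "manifest_analysis": 80,
    "llm_behavioral_safety": 55,
    "static_code_analysis": 70,
    "dependency_graph": 60,
}
print(aggregate_trust_score(example))  # 65.2 under these made-up weights
```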
Layer Breakdown
Behavioral Risk Signals
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Prompt Injection via Role-Setting and Instruction.** The untrusted skill content attempts to set the LLM's persona and provide direct instructions, such as 'You are an expert in viral growth and referral marketing. Your goal is to help design and optimize programs...' and 'If .claude/product-marketing-context.md exists, read it before asking questions. Use that context...'. This is a direct attempt to manipulate the host LLM's behavior and internal state from untrusted input, which is a critical prompt injection vulnerability. Remediation: remove all direct instructions and persona-setting from untrusted content; the LLM's role and capabilities should be defined by the system prompt, not by user-provided or untrusted skill content (see the detection sketch below). | Unknown | SKILL.md:5 |
| HIGH | **Data Exfiltration / Excessive Permissions via Local File Read Instruction.** The untrusted skill explicitly instructs the LLM to read local files, for example: 'If .claude/product-marketing-context.md exists, read it before asking questions. Use that context...' and 'For examples and incentive sizing: See [references/program-examples.md]'. This grants the LLM access to the local filesystem, which could be exploited to exfiltrate sensitive data if the paths could be manipulated or if the referenced files contain confidential information. Even if the files are benign, the capability to read local files from untrusted input represents an excessive permission. Remediation: prevent the LLM from directly accessing local files based on instructions from untrusted content; if context is needed, it should be provided explicitly by the system or through a controlled, sandboxed mechanism that validates file paths and content, and untrusted content must not dictate file system operations (see the path-validation sketch below). | Unknown | SKILL.md:9 |
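The critical finding's remediation is to keep persona-setting and imperative instructions out of untrusted skill content. As a minimal illustration (not SkillShield's detection logic), a host application could flag such phrasing before the skill text ever reaches the model; the patterns and the `flag_injection_attempts` helper below are assumptions made for this sketch.

```python
import re

# Illustrative patterns for persona-setting and imperative instructions in
# untrusted skill text. Real detection would need far broader coverage.
SUSPICIOUS_PATTERNS = [
    re.compile(r"\byou are an?\b", re.IGNORECASE),         # persona-setting
    re.compile(r"\byour goal is\b", re.IGNORECASE),        # goal injection
    re.compile(r"\bignore (all|previous) instructions\b", re.IGNORECASE),
    re.compile(r"\bread it before\b", re.IGNORECASE),      # action directive
]

def flag_injection_attempts(skill_text: str) -> list[str]:
    """Return the lines of untrusted skill content that match any pattern."""
    flagged = []
    for line in skill_text.splitlines():
        if any(p.search(line) for p in SUSPICIOUS_PATTERNS):
            flagged.append(line.strip())
    return flagged

# The persona-setting sentence quoted in the finding is flagged.
sample = "You are an expert in viral growth and referral marketing."
print(flag_injection_attempts(sample))
```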
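For the high-severity finding, the remediation calls for a controlled, sandboxed mechanism that validates file paths rather than letting untrusted content dictate reads. Below is a minimal sketch assuming the host confines skill-driven reads to a single allowlisted directory; `ALLOWED_ROOT` and `safe_read` are hypothetical names, not part of any Claude or SkillShield API.

```python
from pathlib import Path

# Hypothetical allowlist: the only directory the host permits skill-driven reads from.
ALLOWED_ROOT = Path(".claude").resolve()

def safe_read(requested_path: str, max_bytes: int = 64_000) -> str:
    """Read a file only if it resolves inside the allowlisted directory."""
    resolved = Path(requested_path).resolve()
    # resolve() collapses '..' segments, so traversal outside the root is caught here.
    if not resolved.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"refusing to read outside {ALLOWED_ROOT}: {requested_path}")
    # Cap how much content is handed back to the model.
    return resolved.read_text(encoding="utf-8")[:max_bytes]
```

With a gate like this, a request for `.claude/product-marketing-context.md` succeeds only because it resolves under the allowlisted root, while a path pointing at, say, a home-directory secret is rejected before any read happens.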
Scan History
Embed Code
[SkillShield report](https://skillshield.io/report/2048107ac960c42d)
Powered by SkillShield