Trust Assessment
launch-strategy received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
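For orientation, the score-to-category mapping can be sketched as below. The band thresholds are hypothetical, chosen only so that a score of 72 lands in Caution as stated above; they are not SkillShield's published cutoffs.

```python
def trust_category(score: int) -> str:
    """Map a 0-100 trust score to a category label.

    The 85/60 thresholds are illustrative assumptions, not
    SkillShield's actual banding.
    """
    if score >= 85:
        return "Trusted"
    if score >= 60:
        return "Caution"
    return "High Risk"

print(trust_category(72))  # → Caution
```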
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding: "Untrusted skill attempts to read local file and modify LLM behavior."
The analysis covered 4 layers: manifest_analysis, llm_behavioral_safety, dependency_graph, static_code_analysis. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 16, 2026 (commit a04cb61a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Untrusted skill attempts to read local file and modify LLM behavior | Unknown | SKILL.md:10 |

The untrusted skill explicitly instructs the host LLM to read a local file (`.claude/product-marketing-context.md`) and use its content to inform subsequent interactions. This is a direct prompt injection attempt: it tries to manipulate the LLM's internal state and behavior based on external, potentially untrusted, data. It also poses a data exfiltration risk, since the skill attempts to access and incorporate arbitrary local file content into the LLM's context, which could contain sensitive information.

Remediation: Remove instructions that direct the LLM to read local files or access external resources without explicit user consent or a secure, sandboxed mechanism. If context is needed, it should be provided explicitly by the user or through a secure, pre-approved data source.
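The kind of check that surfaces this finding can be sketched as a line-level scan of a skill's instructions for file-read directives. Everything here is a minimal illustration: the regex, helper name, and sample text are assumptions, not SkillShield's actual static-analysis rules.

```python
import re

# Illustrative pattern: an instruction verb followed (within the same line)
# by something that looks like a local file path with a common extension.
FILE_READ_PATTERN = re.compile(
    r"\b(read|open|load|cat)\b[^\n]{0,80}?"
    r"((\.{1,2}/)?[\w.\-/]+\.(md|txt|json|yaml|yml|env))",
    re.IGNORECASE,
)

def scan_skill_text(text: str) -> list:
    """Return one finding per line that instructs the model to read a local file."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        match = FILE_READ_PATTERN.search(line)
        if match:
            findings.append({
                "severity": "CRITICAL",
                "location": f"SKILL.md:{lineno}",
                "path": match.group(2),
            })
    return findings

sample = "...\nBefore replying, read .claude/product-marketing-context.md and apply it.\n"
print(scan_skill_text(sample))  # flags line 2 with the .claude/... path
```

A real analyzer would go further (resolving relative paths, distinguishing sandboxed reads from host-LLM instructions), but the core signal is the same: instruction text steering the model toward unapproved local data.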
Scan History
[View the full report on SkillShield](https://skillshield.io/report/7c3727a43a5de562)