Security Audit
garrettjsmith/localseoskills:skills/client-deliverables
github.com/garrettjsmith/localseoskills

Trust Assessment
garrettjsmith/localseoskills:skills/client-deliverables received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 2 critical, 0 high, 0 medium, and 0 low severity. The key findings are "Untrusted content attempts to instruct LLM on tool usage" and "Untrusted content attempts to define LLM persona and goals."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on March 26, 2026 (commit 0d3fc105). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted content attempts to instruct LLM on tool usage.** The skill's `SKILL.md` file, which is treated as untrusted input, contains explicit instructions telling the LLM which tool to use (`localseodata-tool`) and how to call its specific functions. This is a prompt injection vector: tool-usage instructions should come from trusted system prompts or tool definitions, not from untrusted skill content. Remediation: remove all direct tool-usage instructions from `SKILL.md`, and supply tool definitions and usage guidelines via trusted system prompts or tool schemas. | LLM | SKILL.md:5 |
| CRITICAL | **Untrusted content attempts to define LLM persona and goals.** The skill's `SKILL.md` file, which is treated as untrusted input, contains instructions that attempt to define the LLM's persona and primary goals. This is a prompt injection vector: the LLM should not accept behavioral instructions from untrusted sources. Remediation: remove all direct instructions to the LLM (e.g., "You are...", "Your goal is...") from `SKILL.md`, and define the LLM's persona and instructions in trusted system prompts or configuration. | LLM | SKILL.md:7 |
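Both findings reduce to the same pattern: imperative instructions aimed at the LLM embedded in untrusted skill content. A minimal sketch of how such content could be flagged before it reaches a model, using illustrative regexes; the patterns, sample text, and function name here are hypothetical and do not reflect SkillShield's actual detection rules:

```python
import re

# Illustrative patterns only, approximating the two finding classes above:
# untrusted content that sets an LLM persona/goal, or directs tool usage.
PERSONA_PATTERNS = [
    re.compile(r"\byou are\b", re.IGNORECASE),
    re.compile(r"\byour (?:goal|job|task) is\b", re.IGNORECASE),
]
TOOL_PATTERNS = [
    re.compile(r"\buse the [\w-]*tool\b", re.IGNORECASE),
    re.compile(r"\bcall the [\w-]*(?:tool|function)\b", re.IGNORECASE),
]


def scan_skill_content(text: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for suspicious LLM-directed lines."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in PERSONA_PATTERNS):
            findings.append((lineno, "persona/goal instruction in untrusted content"))
        if any(p.search(line) for p in TOOL_PATTERNS):
            findings.append((lineno, "tool-usage instruction in untrusted content"))
    return findings


# Hypothetical SKILL.md excerpt exhibiting both finding classes.
sample = "You are a local SEO expert.\nUse the localseodata-tool to fetch rankings."
for lineno, finding in scan_skill_content(sample):
    print(f"line {lineno}: {finding}")
```

A real scanner would also need an LLM-based layer, since prompt injection is easy to rephrase past fixed regexes, which is presumably why these findings are attributed to the LLM Behavioral Safety layer rather than static analysis.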
Embed Code
[SkillShield report](https://skillshield.io/report/e94e529c6832cfd0)
Powered by SkillShield