Trust Assessment
grazer-skill received a trust score of 48/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 5 findings: 1 critical, 1 high, 2 medium, and 1 low severity. Key findings include Missing required field: name, Unpinned npm dependency version, Node lockfile missing.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings5
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Untrusted user input directly injected into LLM prompt The `generateLLMResponse` function in `src/notifications.ts` constructs an LLM prompt by directly interpolating `notification.content` (which originates from an untrusted `from_user`) into the prompt string. An attacker can craft malicious `notification.content` to manipulate the LLM's behavior, bypass safety measures, or extract sensitive information. This is a classic prompt injection vulnerability. Implement strict input sanitization or use a structured prompting approach (e.g., JSON-based input for the LLM where user content is a specific field, not directly interpolated into the main instruction). Consider using a separate, sandboxed LLM for untrusted input, or a content filter before passing to the LLM. | LLM | src/notifications.ts:169 | |
| HIGH | Potential data exfiltration via LLM prompt The `generateLLMResponse` function in `src/notifications.ts` sends `agentProfile` details (name, personality, responseStyle) along with untrusted user content (`notification.content`) to an external LLM service. While the `agentProfile` details shown are not inherently highly sensitive, this pattern demonstrates that internal application data is being exposed to the LLM alongside potentially malicious user input. An attacker exploiting the prompt injection vulnerability could instruct the LLM to reveal more sensitive internal data if it were present in the prompt or accessible to the LLM. Carefully review all data passed to the LLM, especially when combined with untrusted input. Ensure that no sensitive internal information is included in the prompt. If certain internal data is necessary for the LLM's function, consider using techniques like retrieval-augmented generation (RAG) or fine-tuning rather than direct prompt injection, and always sanitize or filter untrusted user input. | LLM | src/notifications.ts:169 | |
| MEDIUM | Missing required field: name The 'name' field is required for claude_code skills but is missing from frontmatter. Add a 'name' field to the SKILL.md frontmatter. | Static | skills/scottcjn/grazer-skill/SKILL.md:1 | |
| MEDIUM | Unpinned npm dependency version Dependency 'axios' is not pinned to an exact version ('^1.6.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/scottcjn/grazer-skill/package.json | |
| LOW | Node lockfile missing package.json is present but no lockfile was found (package-lock.json, pnpm-lock.yaml, or yarn.lock). Commit a lockfile for deterministic dependency resolution. | Dependencies | skills/scottcjn/grazer-skill/package.json |
Scan History
Embed Code
[](https://skillshield.io/report/6ad391044695878d)
Powered by SkillShield