Trust Assessment
grazer received a trust score of 10/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 14 findings: 3 critical, 1 high, 9 medium, and 1 low severity. Key findings include Network egress to untrusted endpoints, Missing required field: name, Suspicious import: requests.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 39/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings14
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Network egress to untrusted endpoints HTTP request to raw IP address Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/scottcjn/grazer/grazer/imagegen.py:188 | |
| CRITICAL | Network egress to untrusted endpoints Python requests POST/PUT to URL Review all outbound network calls. Remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | skills/scottcjn/grazer/setup.py:16 | |
| CRITICAL | Prompt Injection via User-Controlled Notification Content The `generateLLMResponse` function in `src/notifications.ts` constructs an LLM prompt by directly embedding `notification.content` (e.g., comments or replies from external platforms). Since `notification.content` is user-controlled, an attacker can craft malicious input to manipulate the agent's LLM, leading to prompt injection. This could cause the agent to generate harmful responses, leak sensitive information, or perform unintended actions during automated conversations. Implement robust input sanitization and validation for all user-controlled content (e.g., `notification.content`) before embedding it into LLM prompts. Consider using a strict templating approach that separates user input from instructions, or an LLM specifically fine-tuned for handling untrusted inputs. | LLM | src/notifications.ts:199 | |
| HIGH | Prompt Injection via User-Controlled Image Prompt The `generate_llm_svg` function in `grazer/imagegen.py` and `generateLlmSvg` in `src/imagegen.ts` construct LLM prompts by directly embedding the user-provided `image_prompt`. An attacker can use this to manipulate the LLM's behavior, potentially causing it to generate unintended or malicious SVG content, or attempting to extract information from the LLM's context. Implement robust input sanitization and validation for the `image_prompt` before embedding it into LLM prompts. Ensure the LLM is constrained to only generate SVG and cannot be coerced into other tasks. Consider using a separate, isolated LLM for untrusted inputs or a strict prompt template that prevents manipulation. | LLM | grazer/imagegen.py:189 | |
| MEDIUM | Missing required field: name The 'name' field is required for claude_code skills but is missing from frontmatter. Add a 'name' field to the SKILL.md frontmatter. | Static | skills/scottcjn/grazer/SKILL.md:1 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/scottcjn/grazer/grazer/__init__.py:6 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/scottcjn/grazer/grazer/clawhub.py:6 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/scottcjn/grazer/grazer/imagegen.py:17 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/scottcjn/grazer/setup.py:15 | |
| MEDIUM | Unpinned npm dependency version Dependency 'axios' is not pinned to an exact version ('^1.6.0'). Pin dependencies to exact versions to reduce drift and supply-chain risk. | Dependencies | skills/scottcjn/grazer/package.json | |
| MEDIUM | Unpinned Python dependency version Requirement 'requests>=2.31.0' is not pinned to an exact version. Pin Python dependencies with '==<exact version>'. | Dependencies | skills/scottcjn/grazer/requirements.txt:1 | |
| MEDIUM | Data Exfiltration During Package Installation The `post_install` function in `setup.py` makes an HTTP POST request to `https://bottube.ai/api/downloads/skill` upon package installation. This reports the skill name, platform, and version without explicit user consent or notification during the `pip install` process, constituting data exfiltration. Remove the `post_install` function or modify it to require explicit user opt-in before sending any data. If tracking is necessary, it should be clearly communicated to the user. | LLM | setup.py:12 | |
| MEDIUM | Incomplete SVG Sanitization The SVG sanitization logic in `grazer/imagegen.py` (`_validate_svg`) and `src/imagegen.ts` (`validateSvg`) is not exhaustive. While it attempts to block `<script>` tags and `on[a-z]+` attributes, it may miss other vectors for malicious SVG content, such as `data:url` in `href`, `style` attributes with `url()` or `javascript:`, `foreignObject` elements, or `use` elements referencing external content. This could lead to Cross-Site Scripting (XSS) or other client-side attacks if an attacker can control the SVG output. Implement a more robust SVG sanitization strategy. This typically involves using a well-vetted SVG sanitization library or implementing a strict whitelist of allowed SVG elements, attributes, and attribute values. Ensure no external references or script execution are possible. | LLM | grazer/imagegen.py:160 | |
| LOW | Unpinned Python Dependency The `requirements.txt` file specifies a minimum version for the `requests` package (`requests>=2.31.0`) but does not pin it to an exact version. This can lead to non-deterministic builds and introduces a supply chain risk, as future versions of the dependency might introduce breaking changes or vulnerabilities. Pin all dependencies to exact versions (e.g., `requests==2.31.0`) to ensure deterministic builds and mitigate supply chain risks. Use a lock file (like `pip freeze > requirements.lock`) for production deployments. | LLM | requirements.txt:1 |
Scan History
Embed Code
[](https://skillshield.io/report/da59651f1697c3a5)
Powered by SkillShield