Trust Assessment
infographic-weather received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 4 findings: 1 critical, 0 high, 3 medium, and 0 low severity. Key findings include Suspicious import: requests, Prompt Injection via User-Controlled Address, Arbitrary File Write via User-Controlled Output Path.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 56/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings4
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Prompt Injection via User-Controlled Address The `address` parameter, which is user-controlled input, is directly interpolated into the `bg_prompt` and `infographic_prompt` sent to the Gemini model. A malicious user could craft the `address` to include prompt injection instructions, potentially manipulating the LLM's behavior to generate undesirable image content, attempt to extract information, or bypass safety filters. Sanitize or escape user-provided `address` before interpolating it into LLM prompts. Consider using a templating mechanism that strictly separates user input from model instructions, or implement robust input validation to restrict the content of the `address`. | LLM | scripts/generate_infographic.py:90 | |
| MEDIUM | Suspicious import: requests Import of 'requests' detected. This module provides network or low-level system access. Verify this import is necessary. Network and system modules in skill code may indicate data exfiltration. | Static | skills/silverkiwi/weather-infographic/scripts/generate_infographic.py:9 | |
| MEDIUM | Arbitrary File Write via User-Controlled Output Path The script writes the generated infographic to `output_path`, which is derived from user input. If the skill execution environment does not strictly validate and restrict `output_path`, a malicious user could specify a path to overwrite arbitrary files on the file system or write to sensitive locations, leading to data corruption or denial of service. Ensure the skill execution environment strictly validates and restricts `output_path` to a designated, isolated output directory. The skill itself could also implement path validation (e.g., ensuring the path is within an allowed output directory and does not contain directory traversal sequences). | LLM | scripts/generate_infographic.py:144 | |
| MEDIUM | Unpinned Dependencies in Install Command The `pip install` command in the skill's manifest does not pin specific versions for `google-generativeai` and `requests`. This introduces a supply chain risk, as future installations could pull in new versions of these packages that might contain vulnerabilities, breaking changes, or even malicious code if a package maintainer's account is compromised. Pin specific versions for all dependencies in the `install` command (e.g., `google-generativeai==X.Y.Z requests==A.B.C`) to ensure reproducible and secure installations. Regularly review and update these pinned versions. | LLM | SKILL.md:1 |
Scan History
Embed Code
[](https://skillshield.io/report/f10193c904fa8854)
Powered by SkillShield