Security Audit
shanraisshan/claude-code-best-practice:.claude/skills/weather-transformer
github.com/shanraisshan/claude-code-best-practiceTrust Assessment
shanraisshan/claude-code-best-practice:.claude/skills/weather-transformer received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. Key findings include Untrusted external file used as direct LLM instructions (Second-order Prompt Injection).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 24, 2026 (commit a4f7f2ec). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Untrusted external file used as direct LLM instructions (Second-order Prompt Injection) The skill's instructions, defined in `SKILL.md`, direct the LLM to read `weather-orchestration/input.md` and interpret its content as 'transformation instructions'. If an attacker can control the content of `weather-orchestration/input.md`, they can inject arbitrary instructions into the LLM's execution flow, effectively achieving a prompt injection. This could lead to data exfiltration (e.g., by instructing the LLM to read other files using the 'Read tool' and write their content to `weather-orchestration/output.md`), unauthorized actions using other available tools, or manipulation of the LLM's behavior. The explicit instruction 'Read the exact transformation from weather-orchestration/input.md - don't assume' confirms that the content of this file is treated as direct instructions for the LLM. Implement strict sanitization or validation of the content read from `weather-orchestration/input.md`. Instead of treating the file content as direct instructions for the LLM, parse it as structured data (e.g., JSON, YAML) with a predefined schema for allowed transformations (e.g., `{"operation": "add", "value": 10}`). Ensure the LLM only executes operations explicitly defined and allowed by the skill, rather than interpreting arbitrary text as instructions. Additionally, limit the scope of the 'Read tool' and 'Write tool' to only the necessary files and directories to prevent unauthorized file access. | LLM | SKILL.md:13 |
Scan History
Embed Code
[](https://skillshield.io/report/68b90ab6b5959e98)
Powered by SkillShield