Trust Assessment
toolbox-talk-generator received a trust score of 58/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings include Missing required field: name, Direct embedding of untrusted input into LLM prompt, Untrusted input embedded into generated talk content, potential for downstream prompt injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| CRITICAL | Direct embedding of untrusted input into LLM prompt The `generate_with_llm` function constructs an LLM prompt by directly embedding the `topic` and `context` parameters using an f-string. If these parameters originate from untrusted user input, a malicious user could inject instructions into the prompt, manipulating the behavior of the underlying LLM to perform unintended actions or reveal sensitive information. Implement robust input sanitization or use a structured prompting approach (e.g., JSON-based prompts, tool calls) that clearly separates user input from system instructions. If direct embedding is necessary, escape or filter potentially malicious characters/phrases to prevent prompt injection. | LLM | SKILL.md:371 | |
| HIGH | Untrusted input embedded into generated talk content, potential for downstream prompt injection The `generate_daily_talk` function takes `weather` and `activities` as string inputs and directly embeds them into the `custom_points` list, which becomes part of the `ToolboxTalk` object's `key_points`. The `format_talk_script` function then directly incorporates these `key_points` into the final formatted script. If this formatted script is subsequently used as input for an LLM (e.g., as the `context` parameter in a function like `generate_with_llm` demonstrated in the same skill), a malicious user could inject prompt instructions via the `weather` or `activities` inputs, leading to prompt injection. Sanitize or validate `weather` and `activities` inputs before embedding them into the `ToolboxTalk` content. If the generated talk is intended for LLM consumption, ensure that the LLM call separates system instructions from user-provided content, or implement specific escaping/filtering for LLM inputs. | LLM | SKILL.md:346 | |
| MEDIUM | Missing required field: name The 'name' field is required for claude_code skills but is missing from frontmatter. Add a 'name' field to the SKILL.md frontmatter. | Static | skills/datadrivenconstruction/toolbox-talk-generator/SKILL.md:1 |
Scan History
Embed Code
[](https://skillshield.io/report/5a880152062f02ac)
Powered by SkillShield