Security Audit
hamelsmu/evals-skills:skills/generate-synthetic-data
github.com/hamelsmu/evals-skills

Trust Assessment
hamelsmu/evals-skills:skills/generate-synthetic-data received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 0 high, 2 medium, and 0 low severity. The key findings are a prompt injection vulnerability in the LLM tuple generation template and a prompt injection vulnerability in the LLM query generation template.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 20, 2026 (commit febdb335). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | **Prompt Injection Vulnerability in LLM Tuple Generation Template.** The skill defines a prompt template for "Step 3: Generate More Tuples with an LLM" with placeholders such as `{your application description}`, `{description}`, and `{values}`. If an orchestrating agent or user interpolates untrusted input into these placeholders without sanitization or escaping, a malicious actor could inject adversarial instructions into the downstream LLM, producing unintended or malicious synthetic tuples or causing the LLM to deviate from its intended task. *Remediation:* sanitize and validate all user-provided values interpolated into LLM prompts; use structured prompting techniques (e.g., JSON mode, function calling) or a prompt templating library with built-in injection defenses to separate instructions from data; and run the generating LLM with minimal permissions and access to sensitive information. | LLM | SKILL.md:50 |
| MEDIUM | **Prompt Injection Vulnerability in LLM Query Generation Template.** The skill defines a prompt template for "Step 4: Convert Each Tuple to a Natural Language Query" with placeholders such as `{your application}`, `{Brief description of what it does.}`, `{value}`, and `{one of your hand-written examples}`. The same risk applies: untrusted input interpolated without sanitization could inject adversarial instructions, yielding unintended or malicious natural language queries. *Remediation:* as above, sanitize inputs, separate instructions from data via structured prompting, and run the LLM with minimal permissions. | LLM | SKILL.md:70 |
Full report: [skillshield.io/report/5b28f1e1dde9e111](https://skillshield.io/report/5b28f1e1dde9e111)
Powered by SkillShield