Trust Assessment
validation-rules-builder received a trust score of 58/100, placing it in the Caution category. The skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. The key findings are arbitrary code execution via custom validation functions (critical), untrusted regex patterns that can lead to ReDoS (high), and a missing required `name` field (medium).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Arbitrary code execution via custom validation functions.** The `add_custom_rule` method in `ValidationRulesBuilder` accepts an arbitrary Python `Callable`, which the `_apply_rule` method later executes directly against input data. If an AI agent using this skill lets untrusted user input define or influence the `func` argument to `add_custom_rule`, this allows arbitrary code execution in the agent's environment, bypassing security controls and potentially compromising the host system. Remediation: 1. **Restrict `Callable` scope**: if custom functions are necessary, sandbox their execution (e.g., a restricted execution environment or a secure sandbox library). 2. **Input validation**: ensure any `Callable` passed to `add_custom_rule` originates from a trusted source and is never generated directly from untrusted user input by the LLM. 3. **Alternative design**: re-evaluate whether arbitrary Python functions are truly necessary; the required custom logic may be expressible in a more constrained, declarative form or via a limited set of pre-defined functions. | LLM | SKILL.md:170 |
| HIGH | **Untrusted regex patterns can lead to ReDoS.** The `add_regex_rule` and `add_pattern` methods accept arbitrary regular-expression strings, which are later used with `re.match`. If an AI agent using this skill lets untrusted user input define these patterns, a malicious regex (e.g., `(a+)+b`) can trigger a Regular Expression Denial of Service (ReDoS) when validating certain input strings, consuming excessive CPU and making the agent unresponsive. Remediation: 1. **Sanitize/validate regex**: if user-defined patterns are necessary, validate them strictly against known ReDoS constructs, or use a library that can analyze regex complexity or limit the allowed regex features. 2. **Limit regex sources**: source patterns only from trusted, pre-defined lists, or review them carefully before they are added by an LLM. 3. **Resource limits**: where possible, enforce CPU-time limits on regex matching to bound the impact of a ReDoS attack. | LLM | SKILL.md:120 |
| MEDIUM | **Missing required field: `name`.** The `name` field is required for claude_code skills but is missing from the SKILL.md frontmatter; add a `name` field to the frontmatter. | Static | skills/datadrivenconstruction/validation-rules-builder/SKILL.md:1 |
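The "alternative design" remediation for the critical finding can be sketched as a named-validator registry: untrusted input may only *select* a pre-approved validator by name, never supply a `Callable` of its own. This is a minimal illustration, not the skill's actual API; `SafeRulesBuilder`, `add_named_rule`, and the validator names below are hypothetical.

```python
# Sketch of a constrained alternative to accepting arbitrary Callables.
# All names here are illustrative, not part of validation-rules-builder.

# Allowlist of pre-approved validators; untrusted input can only pick a key.
APPROVED_VALIDATORS = {
    "non_empty": lambda v: isinstance(v, str) and len(v) > 0,
    "positive_int": lambda v: isinstance(v, int) and v > 0,
}

class SafeRulesBuilder:
    def __init__(self):
        self._rules = []  # list of (field, validator) pairs

    def add_named_rule(self, field, validator_name):
        # Reject anything outside the allowlist: a caller (or an LLM relaying
        # user input) can select a validator but never define one.
        if validator_name not in APPROVED_VALIDATORS:
            raise ValueError("unknown validator: %r" % validator_name)
        self._rules.append((field, APPROVED_VALIDATORS[validator_name]))

    def validate(self, data):
        # Return the fields that fail their validator, in rule order.
        return [field for field, fn in self._rules if not fn(data.get(field))]
```

Because the registry is fixed at import time, adding a rule from untrusted input can at worst raise `ValueError`; it can never inject executable code the way an arbitrary `custom_func` can.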
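The "resource limits" remediation for the ReDoS finding can be approximated by running each match in a short-lived child process and killing it if it exceeds a time budget. This is a sketch under stated assumptions: it uses the POSIX `fork` start method (Unix-only; not Windows), and `safe_match` is a hypothetical helper, not part of the skill.

```python
# Sketch: bound regex matching with a process-level timeout to contain ReDoS.
# Assumes a Unix host where the "fork" multiprocessing start method exists.
import multiprocessing
import re

_CTX = multiprocessing.get_context("fork")

def _match_worker(pattern, text, queue):
    # Runs in the child: report whether the pattern matches the text.
    queue.put(re.match(pattern, text) is not None)

def safe_match(pattern, text, timeout=0.5):
    re.compile(pattern)  # surface syntax errors in the parent, up front
    queue = _CTX.Queue()
    proc = _CTX.Process(target=_match_worker, args=(pattern, text, queue))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        # Catastrophic backtracking (or just a slow match): kill the child
        # so the agent process itself never stalls.
        proc.terminate()
        proc.join()
        raise TimeoutError("regex match exceeded time budget (possible ReDoS)")
    return queue.get()
```

For example, `safe_match(r"(a+)+b", "a" * 28, timeout=0.2)` raises `TimeoutError` instead of pinning a CPU core, while well-behaved patterns return normally. Per-call process spawning adds overhead, so this fits occasional validation of untrusted patterns rather than hot loops.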
Embed Code
[](https://skillshield.io/report/3e58d990838c8eab)
Powered by SkillShield