Security Audit
deployment-validation-config-validate
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
deployment-validation-config-validate received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include potential command injection/data exfiltration via the undefined `loadConfig` method (critical) and data exfiltration via arbitrary file reading in `ConfigurationAnalyzer` (high).
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)

| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Potential command injection/data exfiltration via undefined `loadConfig` method. `RuntimeConfigValidator.loadAndValidate` calls `this.loadConfig(configPath)`, where `configPath` is untrusted input to the `initialize` method, but `loadConfig` is not defined in the provided snippet. If `loadConfig` dynamically loads or executes code from `configPath` (e.g., `require(configPath)` for a JavaScript file), an attacker controlling `configPath` could achieve arbitrary code execution; even if it only reads file content (e.g., `fs.readFileSync`), it still allows arbitrary file reads and data exfiltration. Remediation: define `loadConfig` so it uses safe, data-only parsers (e.g., `JSON.parse` for JSON, `yaml.safeLoad` for YAML), strictly validate the file extension and content type, avoid dynamic code execution such as `require()` or `eval()` on untrusted input, and restrict `configPath` to a safe, predefined directory to prevent path traversal. | LLM | SKILL.md:204 |
| HIGH | Data exfiltration via arbitrary file reading in `ConfigurationAnalyzer`. The `_find_config_files` and `_check_security_issues` methods read files located under the `project_path` argument; an attacker who controls the `project_path` passed to `analyze_project` can make the skill read arbitrary files in that directory tree. Because `_check_security_issues` scans those files for secret patterns (e.g., API keys, passwords), the skill's `security_issues` output becomes a direct exfiltration vector when pointed at sensitive locations. Remediation: validate and sanitize `project_path`, run the skill in a tightly sandboxed environment with minimal file-system access, restrict `project_path` to a predefined non-sensitive directory (or use an allow-list of file types and paths), and redact raw file content and sensitive matches before returning results. | LLM | SKILL.md:73 |
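The remediation for the `loadConfig` finding can be illustrated with a short sketch. The audited code is JavaScript, but the principle is language-agnostic; the Python sketch below is hypothetical, not the skill's implementation, and shows the two controls the finding asks for: confining the resolved path to an allowed base directory, and parsing with a data-only parser rather than any `require()`/`eval()`-style mechanism.

```python
import json
from pathlib import Path

# Illustrative allow-list; extend with ".yaml" only if a safe YAML
# parser (e.g. yaml.safe_load) is available.
ALLOWED_SUFFIXES = {".json"}

def load_config(config_path: str, base_dir: str) -> dict:
    """Hypothetical safe counterpart to the undefined `loadConfig`.

    The path is resolved and required to stay inside `base_dir`,
    which blocks both absolute paths and `../` traversal.
    """
    base = Path(base_dir).resolve()
    path = (base / config_path).resolve()
    if base not in path.parents and path != base:
        raise ValueError(f"config path escapes {base}")
    if path.suffix not in ALLOWED_SUFFIXES:
        raise ValueError(f"unsupported config type: {path.suffix}")
    # json.loads is purely declarative: unlike require() or eval(),
    # it cannot execute attacker-supplied code.
    return json.loads(path.read_text(encoding="utf-8"))
```

The suffix check runs before any read, so even a probe for a file outside the allow-list fails without touching the file system.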
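The redaction advice in the `ConfigurationAnalyzer` finding (report that a secret-like string exists without echoing it) can be sketched as follows. This is a minimal illustration under stated assumptions, not the skill's actual code: the `scan_for_secrets` helper and its two patterns are hypothetical, and real scanners use far larger rule sets. The key design choice is that each finding carries only the rule name, file, and line number, never the matched text.

```python
import re
from pathlib import Path

# Illustrative patterns only.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
}

def scan_for_secrets(project_path: str) -> list:
    """Report where secret-like strings occur, with values redacted."""
    findings = []
    root = Path(project_path).resolve()
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        for lineno, line in enumerate(text.splitlines(), start=1):
            for rule, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    # Location only; the matched secret is never stored.
                    findings.append({
                        "rule": rule,
                        "file": str(path.relative_to(root)),
                        "line": lineno,
                    })
    return findings
```

With this shape, even if `project_path` is pointed at a sensitive tree, the returned `security_issues` list reveals that secrets exist but not what they are.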
[Full SkillShield report](https://skillshield.io/report/5f87dfdc79a08d90)