Security Audit
unit-testing-test-generate
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
unit-testing-test-generate received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 1 critical, 1 high, 0 medium, and 0 low severity. Key findings include Command Injection via `subprocess.run` and Arbitrary File Read via `open()`.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 55/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Command Injection via `subprocess.run`.** The `CoverageAnalyzer.analyze_coverage` method executes a `test_command` string directly with `subprocess.run`. If `test_command` is derived from untrusted user input (e.g., `$ARGUMENTS`, as indicated in the skill's requirements), an attacker could inject arbitrary shell commands, leading to remote code execution, data exfiltration, or compromise of the host environment. *Remediation:* strictly validate and sanitize `test_command`; prefer a whitelist of allowed commands, and pass arguments to `subprocess.run` as a list (without a shell) so that shell metacharacters are treated as literal arguments. Validate each argument in the list as well. | LLM | SKILL.md:178 |
| HIGH | **Arbitrary File Read via `open()`.** The `TestGenerator._analyze_python` method passes `file_path` directly to `open(file_path)`. If `file_path` is derived from untrusted user input without validation, a malicious user could supply a path to sensitive system files (e.g., `/etc/passwd`, `/app/secrets.env`), leading to unauthorized data exfiltration. *Remediation:* resolve `file_path` and verify it refers only to files within an allowed, sandboxed directory; reject path-traversal attempts by checking that the resolved path stays under the intended working directory or a designated safe zone. | LLM | SKILL.md:63 |
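The command-injection remediation above can be sketched as follows. This is a minimal illustration, not code from the skill itself: the whitelist `ALLOWED_COMMANDS` and the helper `run_test_command` are hypothetical names chosen for the example. The key points are splitting the user string with `shlex.split` and running it with `shell=False` (the `subprocess.run` default for list arguments), so metacharacters like `;` or `$()` are never interpreted by a shell.

```python
import shlex
import subprocess

# Hypothetical whitelist of test runners this skill is permitted to invoke.
ALLOWED_COMMANDS = {"pytest", "python", "npm", "go"}

def run_test_command(test_command: str) -> subprocess.CompletedProcess:
    """Split a user-supplied command and run it without a shell.

    Raises ValueError if the command is empty or its executable is
    not on the whitelist.
    """
    args = shlex.split(test_command)
    if not args or args[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"Command not allowed: {args[0] if args else '<empty>'}")
    # Passing a list with shell=False means ';', '&&', '$()' etc. arrive
    # as literal argument text, not as shell syntax.
    return subprocess.run(args, capture_output=True, text=True, timeout=300)
```

Note that `shlex.split("pytest; rm -rf /")` yields `['pytest;', 'rm', '-rf', '/']`, so the injection attempt fails the whitelist check on `'pytest;'` rather than being executed.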
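The path-traversal remediation can likewise be sketched with a containment check. Again, `safe_open` and its `base_dir` parameter are illustrative names, not part of the audited skill; the example assumes Python 3.9+ for `Path.is_relative_to`.

```python
from pathlib import Path

def safe_open(file_path: str, base_dir: str = "."):
    """Open file_path only if it resolves inside base_dir.

    Resolving before comparing defeats '../' traversal and absolute
    paths such as '/etc/passwd'.
    """
    base = Path(base_dir).resolve()
    target = Path(base_dir, file_path).resolve()
    if not target.is_relative_to(base):  # Python 3.9+
        raise PermissionError(f"Path escapes sandbox: {file_path}")
    return target.open("r", encoding="utf-8")
```

Because the check runs on the fully resolved path, both `../../etc/passwd` and an absolute `/etc/passwd` are rejected, while any file genuinely under `base_dir` opens normally.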
[View the full report](https://skillshield.io/report/fc836127949d04e1)
Powered by SkillShield