Security Audit
database-design
github.com/sickn33/antigravity-awesome-skillsTrust Assessment
database-design received a trust score of 82/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 1 high, 1 medium, and 0 low severity. Key findings include Potential Prompt Injection via Untrusted File Content, Unrestricted File System Access via `project_path` Argument.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Potential Prompt Injection via Untrusted File Content The `schema_validator.py` script reads content from untrusted schema files (e.g., `schema.prisma`) and incorporates parts of this content (like model names or enum names) directly into its output messages. If a malicious schema file is processed, specially crafted strings within model or enum names could be included in the script's stdout, which is then fed back to the host LLM. This could lead to prompt injection, allowing an attacker to manipulate the LLM's subsequent behavior or extract sensitive information. Sanitize or escape any untrusted input (e.g., model names, enum names) before incorporating it into output messages that will be processed by an LLM. Consider using a dedicated sanitization library or encoding mechanism to prevent prompt injection payloads from being interpreted as instructions by the LLM. | LLM | scripts/schema_validator.py:60 | |
| MEDIUM | Unrestricted File System Access via `project_path` Argument The `schema_validator.py` script takes a `project_path` argument directly from `sys.argv[1]` without sufficient validation or restriction. This path is then used with `Path.glob('**/...')` to search for and read schema files. Given the declared `Read` and `Glob` permissions, an attacker could potentially provide a `project_path` pointing to sensitive directories (e.g., `/`, `/etc`, `/var/log`) to search for and read files matching the glob patterns (`prisma/schema.prisma`, `drizzle/*.ts`). While the glob patterns are specific, if an attacker can place a file with a matching name in a sensitive location, its content could be exfiltrated via the script's output. Restrict the `project_path` argument to a safe, sandboxed directory, ideally within the skill's own workspace or a user-defined project directory. Implement strict path validation to prevent directory traversal attacks (e.g., `../`). Ensure the agent's execution environment enforces strict sandboxing for file system operations. | LLM | scripts/schema_validator.py:50 |
Scan History
Embed Code
[](https://skillshield.io/report/46f76491a02f5890)
Powered by SkillShield