Security Audit
production-code-audit
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
production-code-audit received a trust score of 65/100, placing it in the Caution category. The skill carries security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 1 high, 1 medium, and 0 low severity. Key findings: excessive filesystem read/write permissions requested, potential for data exfiltration through broad file access, and implied command execution without explicit sandboxing.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 48/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Excessive filesystem read/write permissions requested.** The skill explicitly instructs the AI agent to perform broad filesystem operations, including 'Read all files - Scan every file in the project recursively', 'Use `readFile` to read every source file', 'Use `strReplace` to fix issues in files', and 'Add missing files'. This grants the skill full read and write access to the entire project directory. Such extensive permissions are highly dangerous: a compromised or malicious skill could read sensitive data (e.g., credentials, PII), modify critical code, or delete files, leading to data exfiltration, integrity breaches, or denial of service. *Remediation:* Restrict the skill's filesystem access to only the necessary files and directories; implement granular permissions (e.g., read-only for discovery, specific file paths for modification) rather than blanket recursive access; and require explicit user confirmation for any write operation or sensitive file access. | LLM | SKILL.md:40 |
| HIGH | **Potential for data exfiltration due to broad file access.** The skill's core functionality involves 'Read all files - Scan every file in the project recursively' and 'Use `readFile` to read every source file'. While the stated purpose is to audit and fix code, this capability inherently allows the AI agent to access and process all data within the repository, including API keys, database credentials, personally identifiable information (PII), and proprietary business logic. If the AI agent or the skill's execution environment were compromised, this broad access could be exploited to exfiltrate sensitive data. *Remediation:* Implement strict data handling policies; ensure sensitive data, once read, is processed securely and not stored or transmitted unnecessarily; consider redacting or masking sensitive information before the AI processes it; and limit the skill's readable scope to files strictly necessary for its function, excluding configuration files, `.env` files, and data files containing PII. | LLM | SKILL.md:40 |
| MEDIUM | **Implied command execution without explicit sandboxing.** The skill describes actions such as 'Run all tests to ensure nothing broke' and 'Add CI/CD pipeline (.github/workflows)', which strongly imply the execution of shell commands or external processes (e.g., test runners, git commands, CI/CD tooling). Without explicit sandboxing or command validation, there is a risk of command injection if user-controlled input or compromised code influences the commands being executed; an attacker could inject malicious commands to gain control of the execution environment. *Remediation:* Perform any underlying command execution inside a strictly sandboxed environment; thoroughly validate and sanitize all command inputs; prefer dedicated API calls or libraries over direct shell execution; and where shell execution is unavoidable, use parameterized commands and never interpolate untrusted input directly into command strings. | LLM | SKILL.md:139 |
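The granular-permissions remediation for the CRITICAL finding can be sketched as an allowlist guard; this is a minimal illustration, and the directory names and helper function are hypothetical rather than taken from the skill:

```python
from pathlib import Path

# Hypothetical allowlist guard: confine a skill's file access to
# explicitly approved directories instead of blanket recursive access.
# The roots below are example choices, not the skill's actual layout.
ALLOWED_ROOTS = [Path("src").resolve(), Path("tests").resolve()]

def is_access_allowed(requested: str) -> bool:
    """Allow access only if the resolved path sits under an approved root."""
    target = Path(requested).resolve()  # collapses ../ traversal attempts
    return any(target.is_relative_to(root) for root in ALLOWED_ROOTS)
```

Per the report's recommendation, a write operation passing this check would additionally prompt the user for explicit confirmation.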
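For the HIGH finding, the suggested masking of secrets before file contents reach the model could look like this sketch; the patterns are illustrative examples and far from exhaustive:

```python
import re

# Illustrative secret-masking pass, following the report's advice to
# redact sensitive values before an AI agent processes file contents.
SECRET_PATTERNS = [
    # key=value style assignments for common secret names
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[=:]\s*\S+"),
    # AWS access key ID shape
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
]

def redact(text: str) -> str:
    """Replace anything matching a known secret pattern with a marker."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

A real deployment would pair this with the report's other advice: excluding `.env` and configuration files from the readable scope entirely rather than relying on redaction alone.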
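The MEDIUM finding's advice to prefer parameterized commands over shell interpolation can be sketched as follows; this is a generic illustration, since the skill itself does not specify how it runs tests:

```python
import subprocess
import sys

# Minimal sketch of parameterized command execution: passing an argument
# list with shell=False means no word splitting, no $(...) substitution,
# and no && chaining, so attacker-influenced values arrive as inert
# literal arguments instead of being parsed as shell syntax.
def run_tool(argv: list[str]) -> subprocess.CompletedProcess:
    return subprocess.run(argv, shell=False, capture_output=True, text=True)

# An injection attempt is delivered as a single harmless argument:
result = run_tool([sys.executable, "-c", "import sys; print(sys.argv[1])",
                   "file.txt; rm -rf /"])
```

Here the would-be payload is simply printed back verbatim; under `shell=True` with string interpolation, the same input could have executed the destructive command.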
[View full report on SkillShield](https://skillshield.io/report/ea69a3fc96adbe8c)