Security Audit
performance-testing-review-ai-review
Source: github.com/sickn33/antigravity-awesome-skills

Trust Assessment
performance-testing-review-ai-review received a trust score of 79/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 1 medium, and 1 low severity. Key findings include Data Exfiltration to Third-Party LLMs, Potential Secret Exfiltration via Secret Scanning Output, and an Unpinned Python Dependency.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Data Exfiltration to Third-Party LLMs.** The skill's core functionality sends potentially sensitive code diffs, static analysis results (which can contain vulnerability details), and code snippets to external large language models such as Anthropic's Claude and OpenAI's GPT, as shown explicitly in the `ai_review` function and `security_analysis_prompt`. Proprietary code, intellectual property, and discovered vulnerabilities are therefore transmitted to third-party services, each with its own data retention and privacy policies. Recommendations: 1. **Transparency:** clearly inform users that their code and vulnerability data will be sent to third-party LLM providers. 2. **Data minimization:** filter or redact highly sensitive information from `diff` and `static_results` before sending it to LLMs, where this does not compromise review quality. 3. **Provider agreements:** ensure LLM providers have robust data privacy and security agreements in place, including commitments not to train on customer data without explicit consent. 4. **On-premise/private LLMs:** for highly sensitive environments, consider self-hosted or private LLM deployments. | LLM | SKILL.md:200 |
| MEDIUM | **Potential Secret Exfiltration via Secret Scanning Output.** The "Secret Scanning" section demonstrates using `trufflehog` to detect secrets. Although the command pipes output to `jq` to select specific fields, the raw `trufflehog` output *will* contain the discovered secrets. If that raw output, or even the processed output, is later logged, stored insecurely, or sent to another external service (e.g., an LLM for summarization, or a monitoring system), it could exfiltrate the very secrets it is designed to find. Recommendations: 1. **Secure handling:** treat secret-scanner output with extreme care and avoid logging raw output. 2. **Redaction/masking:** redact or mask secret values before displaying or transmitting them, even when only metadata is intended. 3. **Access control:** restrict access to any systems or logs that might contain secret scanning results. 4. **Direct integration:** prefer direct API integrations with secret management systems over intermediate processing of raw output. | LLM | SKILL.md:149 |
| LOW | **Unpinned Python Dependency.** The Python `CodeReviewOrchestrator` uses the `anthropic` library, but the snippet does not pin its version. If `anthropic` is installed unpinned (e.g., via `requirements.txt`), upgrades can introduce unexpected behavior, breaking changes, or even malicious code if a compromised version is published to PyPI. Recommendations: 1. **Pin dependencies:** pin exact versions for all direct and transitive dependencies (e.g., `anthropic==0.20.0`). 2. **Dependency locking:** use a locking mechanism (`pip freeze > requirements.txt`, Poetry, or Pipenv) to ensure reproducible builds. 3. **Vulnerability scanning:** regularly scan dependencies for known vulnerabilities (e.g., `pip-audit`, Snyk, Dependabot). | LLM | SKILL.md:195 |
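The pinning recommendation for the LOW finding amounts to a locked `requirements.txt`. The version number below is illustrative, not the version the skill actually uses:

```text
# requirements.txt -- pin exact versions for reproducible installs
anthropic==0.20.0
```

Regenerating the file with `pip freeze > requirements.txt` after deliberate upgrades, and scanning it with `pip-audit -r requirements.txt` in CI, covers the locking and vulnerability-scanning recommendations as well.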
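The data-minimization recommendation for the HIGH finding can be sketched as a redaction pass applied to the diff before it leaves the process. This is an illustrative sketch only: the `redact_diff` helper and the patterns below are assumptions, not part of the audited skill, and real deployments would need a much broader pattern set.

```python
import re

# Illustrative (not exhaustive) patterns for likely secrets in a diff.
SENSITIVE_PATTERNS = [
    # key=value assignments for common credential names
    (re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*\S+"),
     r"\1=<REDACTED>"),
    # AWS access key IDs (AKIA followed by 16 uppercase alphanumerics)
    (re.compile(r"AKIA[0-9A-Z]{16}"), "<REDACTED_AWS_KEY>"),
    # PEM-encoded private key blocks
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?"
                r"-----END [A-Z ]*PRIVATE KEY-----"),
     "<REDACTED_PRIVATE_KEY>"),
]

def redact_diff(diff: str) -> str:
    """Return a copy of the diff with likely secrets masked."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        diff = pattern.sub(replacement, diff)
    return diff
```

A pass like this reduces what reaches the third-party LLM, but it cannot catch every secret format, so it complements rather than replaces the transparency and provider-agreement recommendations.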
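The redaction/masking recommendation for the MEDIUM finding can be sketched in Python: parse each JSON finding from the scanner and keep only metadata, never the raw secret. The field names (`DetectorName`, `Verified`, `Raw`) follow trufflehog v3's JSON output but should be treated as assumptions here.

```python
import json

def mask_finding(raw_line: str) -> dict:
    """Reduce one trufflehog JSON finding to metadata only.

    The secret value ("Raw") is replaced with a short, non-reusable
    hint, so logs and downstream services never see the full secret.
    """
    finding = json.loads(raw_line)
    secret = finding.get("Raw", "")
    return {
        "detector": finding.get("DetectorName"),
        "verified": finding.get("Verified"),
        # Keep at most the first 4 characters as a triage hint.
        "hint": (secret[:4] + "...") if secret else "",
    }
```

Only the masked dictionary should ever be logged or forwarded; the raw scanner output stays in memory and is discarded.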
[Full report](https://skillshield.io/report/26f436e65168a475)