Security Audit
Jamkris/everything-gemini-code:skills/data-scraper-agent
github.com/Jamkris/everything-gemini-codeTrust Assessment
Jamkris/everything-gemini-code:skills/data-scraper-agent received a trust score of 48/100, placing it in the Untrusted category. This skill has significant security findings that require attention before use in production.
SkillShield's automated analysis identified 3 findings: 0 critical, 1 high, 1 medium, and 1 low severity. Key findings include LLM Prompt Injection via User-Controlled Context and Feedback, Unpinned Dependencies in requirements.txt, Broad GitHub Actions 'contents: write' Permission.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 30, 2026 (commit 6c6f43aa). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings3
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | LLM Prompt Injection via User-Controlled Context and Feedback The AI pipeline constructs prompts for the Gemini LLM by directly incorporating user-controlled content from `profile/context.md` (user context) and `data/feedback.json` (user feedback history). A malicious user could craft these files to inject instructions into the LLM prompt, potentially manipulating the LLM's behavior (e.g., to ignore previous instructions, reveal internal data, misclassify items, or generate harmful content in summaries/notes). While the prompt attempts to constrain the LLM's output format, sophisticated prompt injection attacks could still subvert the LLM's intended function. Implement robust input sanitization and validation for content loaded from `profile/context.md` and `data/feedback.json` before it is included in the LLM prompt. Consider using LLM guardrails or output validation to detect and reject malicious or out-of-format LLM responses. Clearly document the risks of injecting untrusted content into these files for users. | LLM | SKILL.md:204 | |
| MEDIUM | Unpinned Dependencies in requirements.txt The GitHub Actions workflow uses `pip install -r requirements.txt` to install dependencies. The `requirements.txt` file is not provided in the context, but if it contains unpinned dependencies (e.g., `requests` instead of `requests==2.28.1`), the project could be vulnerable to supply chain attacks such as dependency confusion or malicious package updates. An attacker could publish a malicious package with the same name as an unpinned dependency, leading to its installation and execution. Ensure all dependencies in `requirements.txt` are pinned to exact versions (e.g., `package_name==1.2.3`). Use a dependency management tool that enforces pinning or regularly audit `requirements.txt` for unpinned dependencies. | Static | SKILL.md:320 | |
| LOW | Broad GitHub Actions 'contents: write' Permission The GitHub Actions workflow requests `permissions: contents: write`. While this permission is explicitly used for committing `data/feedback.json` (`git add data/feedback.json`), it grants broad write access to the entire repository. If a separate command injection or arbitrary file write vulnerability were to exist within the Python script, this permission could be leveraged to modify other sensitive files in the repository (e.g., workflow files, source code), potentially leading to repository compromise. The current usage is constrained, but the permission itself is powerful. Thoroughly audit all code paths that handle user-controlled input to prevent any form of command injection or arbitrary file writes. If possible, consider using more granular permissions if GitHub Actions introduces them for specific file paths or directories. Ensure that the `data/feedback.json` file cannot be crafted to contain executable code or trigger unintended actions upon commit. | Static | SKILL.md:310 |
Scan History
Embed Code
[](https://skillshield.io/report/b1bc9e736495f56b)
Powered by SkillShield