Trust Assessment
arb-injection received a trust score of 65/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 1 critical, 2 high, 0 medium, and 0 low severity. The key findings are: unpinned dependencies and dynamic code updates that introduce supply chain risk; potential data exfiltration via an external LLM used for "deep analysis"; and a path traversal risk in the `BYBOB_OUTPUT` configuration.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, indicating areas for improvement.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Unpinned dependencies and dynamic code updates introduce supply chain risk.** The skill instructs users to `git clone` and `npm install` during setup, and to run `git pull origin main` and `npm install` daily for maintenance. `git pull origin main` fetches the latest code from the `main` branch without pinning to a specific commit or tag, so the skill's underlying code can change daily without explicit review, introducing new vulnerabilities or malicious code. `npm install` without a `package-lock.json` or pinned versions can also pull unvetted or malicious packages, and the scheduled task description explicitly reinforces this daily update mechanism. *Remediation:* pin all external code dependencies to specific commit hashes or immutable tags, use `npm ci` with a committed `package-lock.json` to ensure reproducible builds, and require explicit review and approval for code changes. | LLM | SKILL.md:20 |
| HIGH | **Potential data exfiltration via external LLM for "deep analysis".** The skill uses `ANTHROPIC_API_KEY` for "LLM deep analysis", which implies that findings or analysis results are sent to an external LLM service (Anthropic). While the raw contract data being analyzed is public, the generated findings may contain proprietary insights, vulnerability details, or other information a user considers sensitive; sending such data to a third-party service without explicit user consent or clear data-handling policies is a potential data exfiltration risk. *Remediation:* document exactly what data is sent to the external LLM and why, provide opt-out or data-privacy configuration options, and redact or anonymize sensitive information before transmission. | LLM | SKILL.md:69 |
| HIGH | **Path traversal risk with the `BYBOB_OUTPUT` configuration.** The skill allows `BYBOB_OUTPUT` in the `.env` file to override the default results directory. If the underlying `arb-injection` script does not properly sanitize this user-provided path, an attacker (or a malicious configuration) could set `BYBOB_OUTPUT` to a path like `/../../etc/` or `/tmp/` and write files to arbitrary locations, potentially overwriting critical system files, creating malicious executables, or exfiltrating data by writing to publicly accessible directories. *Remediation:* strictly validate and sanitize the `BYBOB_OUTPUT` path to prevent directory traversal, restrict output to a designated sandboxed directory, and consider UUID- or hash-named output subdirectories to prevent collisions and unauthorized access. | LLM | SKILL.md:70 |
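The supply chain remediation above can be sketched as a preflight guard: before running, compare the checked-out HEAD against an explicitly pinned commit and require a committed lockfile, rather than tracking `main` daily. This is a minimal sketch; the function name and check are illustrative assumptions, not part of the skill.

```typescript
// Hypothetical preflight guard for the pinning remediation. In practice the
// headCommit argument would come from `git rev-parse HEAD` and
// lockfilePresent from checking for package-lock.json before `npm ci`.
export function isApprovedCheckout(
  headCommit: string,
  pinnedCommit: string,
  lockfilePresent: boolean,
): boolean {
  // Require a full 40-character SHA-1 so abbreviated hashes cannot collide.
  const fullSha = /^[0-9a-f]{40}$/i;
  return (
    fullSha.test(headCommit) &&
    headCommit.toLowerCase() === pinnedCommit.toLowerCase() &&
    lockfilePresent
  );
}
```

Pairing a check like this with `npm ci` (which installs exactly what the lockfile records and fails if it is missing) makes the daily update path reviewable instead of implicit.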
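The redaction step recommended for the data exfiltration finding can be sketched as a filter applied to findings text before it leaves the machine. The patterns below are illustrative assumptions (raw hex keys and Anthropic-style API tokens); a real deployment would tailor them to the data the skill actually handles.

```typescript
// Hypothetical redaction pass run before findings are sent to an external
// LLM. Pattern choices are assumptions for illustration only.
const SENSITIVE_PATTERNS: RegExp[] = [
  /0x[a-fA-F0-9]{64}/g, // raw 32-byte hex values (e.g. private keys)
  /sk-ant-[A-Za-z0-9_-]+/g, // Anthropic-style API tokens
];

export function redactForLlm(text: string): string {
  // Replace every match of every pattern with a fixed placeholder.
  return SENSITIVE_PATTERNS.reduce(
    (out, pattern) => out.replace(pattern, "[REDACTED]"),
    text,
  );
}
```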
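The path traversal remediation can be sketched as a resolve-then-prefix-check: resolve the configured `BYBOB_OUTPUT` value against an allowed base directory and reject anything that escapes it. The helper name and base directory below are hypothetical, assuming a POSIX-style filesystem.

```typescript
import * as path from "path";

// Hypothetical validator for a user-supplied BYBOB_OUTPUT value. Resolves the
// configured path against a sandbox base directory and throws if the result
// escapes that directory.
export function resolveOutputDir(configured: string, baseDir: string): string {
  const base = path.resolve(baseDir);
  const resolved = path.resolve(base, configured);
  // Prefix check with a separator guard so "/srv/bybob-evil" does not pass
  // for base "/srv/bybob".
  if (resolved !== base && !resolved.startsWith(base + path.sep)) {
    throw new Error(`BYBOB_OUTPUT escapes the sandbox: ${configured}`);
  }
  return resolved;
}
```

With this shape, a value like `../../etc` resolves outside the base directory and is rejected before any file is written.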
Embed Code
[View full report](https://skillshield.io/report/bbdb03325c8353e2)
Powered by SkillShield