Security Audit
citation-management
github.com/davila7/claude-code-templates

Trust Assessment
citation-management received a trust score of 12/100, placing it in the Untrusted category. The skill has significant security findings that must be addressed before production use.
SkillShield's automated analysis identified 23 findings: 11 critical, 2 high, 9 medium, and 1 low severity. Key findings include a dangerous allowed tool (Bash), a suspicious import (`requests`), and network egress to untrusted endpoints.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest, at 0/100.
Last analyzed on February 12, 2026 (commit 458b1186). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (23)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | Arbitrary File Read/Write via User-Controlled Paths. Multiple Python scripts accept user-controlled file paths via `--input`, `--output`, or `--report` arguments. Combined with the declared 'Read' and 'Write' permissions, an attacker can supply arbitrary paths (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, `../../sensitive_data.txt`) to read sensitive system files or write to arbitrary locations, enabling data exfiltration, privilege escalation, or denial of service. Remediation: restrict file operations to a designated, sandboxed directory, e.g. by resolving paths with `pathlib.Path.resolve()` (or `os.path.abspath` plus `os.path.commonprefix`) and verifying they remain inside the allowed directory, or use a file-picker tool that returns file content rather than raw paths. | LLM | scripts/doi_to_bibtex.py:120, 156; scripts/extract_metadata.py:290, 326; scripts/format_bibtex.py:23, 269; scripts/search_google_scholar.py:156; scripts/search_pubmed.py:204; scripts/validate_citations.py:23, 290, 296 |
| HIGH | Dangerous tool allowed: Bash. The skill allows the 'Bash' tool without constraints, granting arbitrary command execution. Remediation: remove unconstrained shell/exec tools from allowed-tools, or add specific command constraints. | Static | cli-tool/components/skills/scientific/citation-management/SKILL.md:1 |
| HIGH | Unpinned Third-Party Dependency (scholarly). `search_google_scholar.py` imports the `scholarly` library without specifying a version, exposing the skill to supply chain attacks: a malicious update to the package could introduce vulnerabilities or backdoors, and incompatible API changes could break functionality. Remediation: pin `scholarly` to a specific, known-good version in a `requirements.txt` or similar (e.g. `scholarly==1.2.3`), and review and update dependencies regularly. | LLM | scripts/search_google_scholar.py:10 |
| MEDIUM | Suspicious import: requests. The `requests` module provides network access; network and system modules in skill code may indicate data exfiltration. Remediation: verify each import is necessary. | Static | cli-tool/components/skills/scientific/citation-management/scripts/{doi_to_bibtex.py:8, extract_metadata.py:9, search_pubmed.py:9, validate_citations.py:9} |
| MEDIUM | Network egress to untrusted endpoints. HTTP request to a raw IP address. Remediation: review all outbound network calls; remove connections to webhook collectors, paste sites, and raw IP addresses. Legitimate API calls should use well-known service domains. | Manifest | cli-tool/components/mcps/devtools/figma-dev-mode.json:4 |
| MEDIUM | Access to Sensitive Environment Variables (NCBI API Key/Email). `extract_metadata.py` and `search_pubmed.py` read `NCBI_API_KEY` and `NCBI_EMAIL` from environment variables. Although used for legitimate API calls, if the skill's code were compromised or modified these credentials could be exfiltrated or misused. Remediation: minimize direct access to sensitive environment variables; where possible, pass credentials as explicit, ephemeral arguments from a secure orchestrator rather than letting the skill read them from its environment, and scope API keys to the minimum necessary permissions. | LLM | scripts/extract_metadata.py:30, 33; scripts/search_pubmed.py:20, 21 |
| LOW | Covert behavior / concealment directives. Multiple zero-width characters (stealth text). Remediation: remove hidden instructions, zero-width characters, and bidirectional overrides; skill instructions should be fully visible and transparent to users. | Manifest | cli-tool/components/mcps/devtools/jfrog.json:4 |
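The path-containment remediation recommended for the CRITICAL findings can be sketched as follows. `resolve_in_sandbox` is a hypothetical helper for illustration, not code from the audited skill, and it assumes Python 3.9+ for `Path.is_relative_to`:

```python
from pathlib import Path

def resolve_in_sandbox(user_path: str, base_dir: str) -> Path:
    """Resolve a user-supplied path and reject anything outside base_dir."""
    base = Path(base_dir).resolve()
    # Joining an absolute user_path replaces base entirely, so the
    # containment check below also catches absolute-path escapes.
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes sandbox: {user_path}")
    return candidate
```

With this check, `resolve_in_sandbox("refs.bib", "/tmp/work")` succeeds, while traversal inputs such as `../../etc/passwd` or absolute paths like `/etc/passwd` raise `ValueError` before any read or write occurs.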
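The credential-passing mitigation for the NCBI environment-variable findings might look like the sketch below: the orchestrator supplies the key and contact email as explicit arguments, so the skill code never touches the process environment. The `--api-key`/`--email` flags and the `build_parser` helper are illustrative assumptions, not the skill's actual interface:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI that receives NCBI credentials explicitly instead of via os.environ."""
    parser = argparse.ArgumentParser(
        description="PubMed search (credential-passing sketch)")
    parser.add_argument("--api-key", required=True,
                        help="NCBI API key, supplied by the caller")
    parser.add_argument("--email", required=True,
                        help="contact email required by NCBI E-utilities")
    parser.add_argument("--query", required=True,
                        help="search term to submit to PubMed")
    return parser
```

Because the credentials arrive per-invocation, they can be ephemeral and minimally scoped, and a compromised copy of the script cannot silently harvest `NCBI_API_KEY` from a long-lived environment.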
Embed Code
https://skillshield.io/report/4aead391cb472251
Powered by SkillShield