## Trust Assessment
jira-ai received a trust score of 73/100, placing it in the Caution category. This skill has security considerations that users should review before deployment.

SkillShield's automated analysis identified 4 findings: 0 critical, 1 high, 2 medium, and 1 low severity. Key findings include exfiltration of local files via Jira comments, potential command injection via the `confl get <url>` parameter, and an unpinned dependency in the installation instructions.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 69/100, indicating areas for improvement.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
## Security Findings (4)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Skill enables exfiltration of local files via Jira comments.** The `jira-ai issue comment` command supports a `--file` argument, allowing the content of any local file to be attached as a comment to a Jira issue. An attacker could craft a prompt to an LLM agent to read sensitive files (e.g., `/etc/passwd`, `~/.ssh/id_rsa`, `.env` files) and exfiltrate their contents by posting them to a Jira issue. This grants the agent excessive filesystem read permissions, posing a significant data exfiltration risk. *Remediation:* implement strict allow-listing for file paths readable via the `--file` argument, or disallow reading arbitrary files. If file uploads are necessary, restrict them to a specific, sandboxed directory, and constrain the LLM agent from supplying arbitrary file paths. | LLM | SKILL.md:105 |
| MEDIUM | **Potential command injection via the `confl get <url>` parameter.** The `jira-ai confl get <url>` command takes an arbitrary URL as input. If the underlying implementation uses a shell command (e.g., `curl`, `wget`) to fetch the content without proper sanitization or escaping of the URL parameter, a malicious URL could be crafted to execute arbitrary shell commands on the host system. For example, `http://example.com; rm -rf /` could be injected if not properly handled. *Remediation:* strictly validate and sanitize the URL parameter before using it in any internal shell commands or network requests, and prefer secure, built-in HTTP client libraries over shelling out to external commands. | LLM | SKILL.md:85 |
| MEDIUM | **Unpinned dependency in installation instructions.** The installation command `npm install -g jira-ai` does not specify a version for the `jira-ai` package, so it always fetches the latest available version. If a malicious update is published to the `jira-ai` package on npm, or if the maintainer's account is compromised, the installed skill could become malicious without explicit user action, introducing a supply chain risk. *Remediation:* pin the dependency to a specific, known-good version (e.g., `npm install -g jira-ai@1.0.0`) and regularly review and update pinned versions after security vetting. | LLM | SKILL.md:10 |
| LOW | **Example configuration promotes broad access to Jira/Confluence.** The provided example configuration for `jira-ai` shows `allowed-jira-projects: - all` and `allowed-commands: - all`. While the tool offers granular access control, this example suggests a default posture of broad permissions. An LLM agent deployed with such a configuration would have unrestricted access to all Jira projects and commands, increasing the blast radius of a prompt injection or other compromise. *Remediation:* configure the tool with the principle of least privilege, explicitly listing only the necessary projects, commands, and Confluence spaces; the example configuration should demonstrate a more restrictive setup. | LLM | SKILL.md:129 |
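For the low-severity configuration finding, a least-privilege version of the flagged example might look like this. The key names mirror the `allowed-jira-projects` and `allowed-commands` keys quoted in the finding; the project key and command name below are hypothetical placeholders:

```
# Least-privilege sketch: list only what the agent actually needs
# instead of "all". PROJ and "issue view" are placeholder values.
allowed-jira-projects:
  - PROJ
allowed-commands:
  - issue view
```

Narrowing both lists limits the blast radius: even if a prompt injection succeeds, the agent cannot post comments, attach files, or touch projects outside the allow-list.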
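As a sketch of the remediation for the high-severity finding, a path allow-list for the `--file` argument could look like the following. Since jira-ai's internals are not shown in this report, the directory name `ALLOWED_UPLOAD_DIR` and the helper `isAllowedAttachmentPath` are hypothetical, illustrative names:

```typescript
import * as path from "path";

// Hypothetical sandboxed directory from which attachments may be read.
const ALLOWED_UPLOAD_DIR = path.resolve("./jira-uploads");

// Returns true only if the user-supplied path resolves to a location
// inside the sandbox, blocking absolute paths and ../ traversal.
function isAllowedAttachmentPath(userPath: string): boolean {
  const resolved = path.resolve(ALLOWED_UPLOAD_DIR, userPath);
  return (
    resolved === ALLOWED_UPLOAD_DIR ||
    resolved.startsWith(ALLOWED_UPLOAD_DIR + path.sep)
  );
}
```

Resolving first and then comparing prefixes is what defeats the `/etc/passwd` and `~/.ssh/id_rsa` cases described in the finding: an absolute path or a traversal sequence resolves outside the sandbox and is rejected before any file is read.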
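For the command-injection finding, the recommended approach (validate the URL, then use a built-in HTTP client rather than a shell) could be sketched as follows. The host allow-list value and function names are assumptions for illustration, not jira-ai's actual API:

```typescript
// Hypothetical allow-list of Confluence hosts the tool may contact.
const ALLOWED_HOSTS = new Set(["example.atlassian.net"]);

// Parse with the WHATWG URL parser; reject non-https schemes and
// unknown hosts before the URL is used anywhere.
function validateConfluenceUrl(raw: string): URL {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    throw new Error(`Not a valid URL: ${raw}`);
  }
  if (url.protocol !== "https:") {
    throw new Error(`Only https URLs are allowed, got ${url.protocol}`);
  }
  if (!ALLOWED_HOSTS.has(url.hostname)) {
    throw new Error(`Host not in allow-list: ${url.hostname}`);
  }
  return url;
}

// Because the parsed URL object is handed to fetch() directly, shell
// metacharacters such as "; rm -rf /" are never seen by a shell.
async function getPage(raw: string): Promise<string> {
  const res = await fetch(validateConfluenceUrl(raw));
  return res.text();
}
```

The key design choice is that no shell is involved at all: the malicious example from the finding, `http://example.com; rm -rf /`, fails URL validation instead of ever reaching an interpreter.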