Security Audit
jira-automation
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
jira-automation received a trust score of 73/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 2 findings: 0 critical, 2 high, 0 medium, and 0 low severity. Key findings: the skill can modify Jira project roles, and it has broad read access to sensitive Jira data.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **Skill grants ability to modify Jira project roles.** The skill exposes the `JIRA_ADD_USERS_TO_PROJECT_ROLE` tool, which allows adding users to specific project roles within Jira. If misused by a compromised LLM, this capability could lead to unauthorized privilege escalation or grant malicious actors access to sensitive projects within the Jira instance. Review whether the ability to modify project roles is strictly necessary for the skill's intended purpose. If so, implement stricter access controls for the Jira connection, require explicit user confirmation for such sensitive operations, or limit the scope of roles that can be modified. Consider breaking this functionality into a separate, more restricted skill if possible. | LLM | SKILL.md:140 |
| HIGH | **Broad read access to sensitive Jira data.** The skill provides tools such as `JIRA_GET_ALL_PROJECTS`, `JIRA_GET_ALL_USERS`, `JIRA_SEARCH_FOR_ISSUES_USING_JQL_POST`, and `JIRA_LIST_ISSUE_COMMENTS`. Together these tools grant extensive read access to potentially sensitive organizational data, including project details, user information, issue content, and comments. A compromised LLM could be prompted to systematically extract and exfiltrate this data. Implement strict access policies for the Jira connection used by the skill, granting only the minimum necessary read permissions. For highly sensitive data, consider requiring explicit user approval or masking certain fields before displaying them to the LLM or user. Monitor usage patterns for unusual data access requests. | LLM | SKILL.md:137 |
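The first finding recommends requiring explicit user confirmation before sensitive operations. A minimal sketch of such a confirmation gate is shown below; the function names (`require_confirmation`, `dispatch`) and the dispatch shape are hypothetical illustrations, not part of the skill's actual API:

```python
# Hypothetical confirmation gate for sensitive tool calls.
# Only the tool name JIRA_ADD_USERS_TO_PROJECT_ROLE comes from the report;
# everything else is an illustrative assumption.

SENSITIVE_TOOLS = {"JIRA_ADD_USERS_TO_PROJECT_ROLE"}

def require_confirmation(tool_name: str, args: dict, confirm) -> bool:
    """Return True only if the call is non-sensitive or explicitly approved.

    `confirm` is any callable that shows a prompt and returns the user's
    answer (e.g. `input` in a CLI host).
    """
    if tool_name not in SENSITIVE_TOOLS:
        return True  # non-sensitive tools pass through unchanged
    prompt = f"Allow {tool_name} with arguments {args}? [y/N] "
    return confirm(prompt).strip().lower() == "y"

def dispatch(tool_name: str, args: dict, confirm, execute):
    """Run a tool call only after the confirmation gate passes."""
    if not require_confirmation(tool_name, args, confirm):
        raise PermissionError(f"{tool_name} blocked: user did not confirm")
    return execute(tool_name, args)
```

The gate sits between the LLM's tool request and the actual Jira client, so a compromised model cannot escalate privileges without a human in the loop.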
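The second finding suggests masking sensitive fields before results reach the LLM. A minimal sketch of such a redaction pass follows; the field names (`emailAddress`, `accountId`) are assumptions about what a Jira user record might contain, not taken from the skill:

```python
# Hypothetical field-masking pass applied to Jira read results before they
# are shown to the LLM. The set of sensitive field names is an assumption.

SENSITIVE_FIELDS = {"emailAddress", "accountId"}

def mask_fields(record: dict) -> dict:
    """Return a copy of the record with sensitive values redacted."""
    return {
        key: "[REDACTED]" if key in SENSITIVE_FIELDS else value
        for key, value in record.items()
    }

def mask_results(records: list[dict]) -> list[dict]:
    """Mask every record in a result set (e.g. from JIRA_GET_ALL_USERS)."""
    return [mask_fields(record) for record in records]
```

Running read-tool output through `mask_results` before it enters the model context limits what a systematic exfiltration prompt could extract, at the cost of hiding those fields from legitimate queries as well.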
[View the full report on SkillShield](https://skillshield.io/report/de72c09e3ccaf3c7)
Powered by SkillShield