Security Audit
confluence-automation
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
confluence-automation received a trust score of 67/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 0 critical, 2 high, 1 medium, and 0 low severity. Key findings: the LLM can be prompted to exfiltrate sensitive Confluence data; the LLM can be prompted to inject malicious XHTML/JS into Confluence pages; and the skill grants broad Confluence management capabilities.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 63/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | **LLM can be prompted to exfiltrate sensitive Confluence data.** The skill exposes powerful Confluence search and content-retrieval tools (`CONFLUENCE_SEARCH_CONTENT`, `CONFLUENCE_CQL_SEARCH`, `CONFLUENCE_GET_PAGE_BY_ID`). An attacker could craft a prompt instructing the LLM to search for sensitive keywords (e.g., "passwords", "secrets", "confidential data") using `CONFLUENCE_CQL_SEARCH` with the `expand` parameter to retrieve full page bodies, then exfiltrate that information. This is a significant data-exfiltration risk if the LLM's tool use is not adequately constrained. **Recommendation:** validate and sanitize user-provided search queries and parameters before passing them to Confluence tools; limit the LLM's ability to construct arbitrary `cql` queries or `expand` parameters; and consider restricting `CONFLUENCE_GET_PAGE_BY_ID` to content the LLM explicitly created or was explicitly authorized to retrieve. | LLM | SKILL.md:69 |
| HIGH | **LLM can be prompted to inject malicious XHTML/JS into Confluence pages.** The skill provides tools to create and update Confluence pages (`CONFLUENCE_CREATE_PAGE`, `CONFLUENCE_UPDATE_PAGE`). These tools accept page content in Confluence storage format (XHTML) via parameters such as `body.storage.value`. If an attacker crafts a prompt that causes the LLM to insert untrusted, unsanitized user input containing malicious XHTML or JavaScript into these parameters, the result is a Cross-Site Scripting (XSS) vulnerability on Confluence pages, affecting every user who views the compromised pages. **Recommendation:** robustly sanitize all user-provided content before it is passed to `CONFLUENCE_CREATE_PAGE` or `CONFLUENCE_UPDATE_PAGE`; allow only safe XHTML tags and attributes, stripping scripts and event handlers; and explicitly instruct and constrain the LLM to sanitize user input for these parameters. | LLM | SKILL.md:45 |
| MEDIUM | **Skill grants broad Confluence management capabilities.** The skill provides access to a wide range of Confluence operations: creating, updating, and deleting pages; comprehensive content search; space management; and label manipulation. While these permissions are necessary for the skill's intended functionality, they represent a significant attack surface. If the LLM's interaction with users is not sufficiently secured against prompt injection, these broad permissions could be abused for unauthorized actions such as mass deletion of content, creation of spam, or widespread data exfiltration. **Recommendation:** implement fine-grained access control where possible, requiring explicit confirmation from the user before highly destructive or sensitive actions; protect the LLM's reasoning and tool selection against manipulation; and consider a human-in-the-loop for critical operations such as page deletion or mass updates. | LLM | SKILL.md:1 |
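The first finding's recommendation, limiting the LLM's ability to construct arbitrary `cql` queries, can be sketched as a templated query builder that only ever embeds user input as a literal search term. This is a minimal illustration, not part of the skill; the function name and space allowlist are assumptions.

```python
import re

# Hypothetical allowlist of spaces the skill is permitted to search.
ALLOWED_SPACES = {"DOCS", "ENG"}

def build_cql(term: str, space: str) -> str:
    """Build a CQL query from user input without letting the caller
    inject CQL operators, quotes, or extra clauses."""
    if space not in ALLOWED_SPACES:
        raise ValueError(f"space {space!r} is not on the allowlist")
    # Strip quotes and backslashes so the term can only ever be a
    # literal search string inside the fixed template below.
    term = re.sub(r'[\\"]', "", term).strip()
    if not term:
        raise ValueError("empty search term")
    return f'space = "{space}" and text ~ "{term}"'
```

Because the template is fixed, a prompt-injected term like `x" or type=page` cannot break out of the quoted literal, and the `expand` parameter is never exposed to the caller at all.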
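Likewise, the XHTML-injection recommendation can be sketched as an allowlist sanitizer built on Python's standard library. The tag and attribute allowlists here are illustrative; a real deployment would tune them to the subset of Confluence storage format it actually needs.

```python
from html import escape
from html.parser import HTMLParser

ALLOWED_TAGS = {"p", "strong", "em", "ul", "ol", "li", "a", "code"}
ALLOWED_ATTRS = {"a": {"href"}}

class StorageFormatSanitizer(HTMLParser):
    """Allowlist-based sanitizer: keeps a small set of safe tags,
    drops everything else, including scripts and event handlers."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self._skip = 0  # depth inside <script>/<style>, whose text we drop

    def handle_starttag(self, tag, attrs):
        if tag in {"script", "style"}:
            self._skip += 1
            return
        if tag not in ALLOWED_TAGS:
            return
        kept = [
            f' {name}="{escape(value or "", quote=True)}"'
            for name, value in attrs
            if name in ALLOWED_ATTRS.get(tag, set())
            and not (value or "").lower().startswith("javascript:")
        ]
        self.out.append(f"<{tag}{''.join(kept)}>")

    def handle_endtag(self, tag):
        if tag in {"script", "style"} and self._skip:
            self._skip -= 1
            return
        if tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self._skip:
            self.out.append(escape(data))

def sanitize(xhtml: str) -> str:
    s = StorageFormatSanitizer()
    s.feed(xhtml)
    s.close()
    return "".join(s.out)
```

Running such a filter over user-supplied content before it reaches `body.storage.value` removes `<script>` blocks, event-handler attributes like `onclick`, and `javascript:` URLs, while preserving benign formatting.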
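Finally, the human-in-the-loop suggestion from the third finding might look like a thin dispatch wrapper that gates destructive tool calls behind an explicit confirmation callback. All names here are hypothetical; only the tool identifiers come from the report.

```python
from typing import Any, Callable, Dict

# Tools that can destroy or overwrite content and therefore need confirmation.
DESTRUCTIVE_TOOLS = {"CONFLUENCE_DELETE_PAGE", "CONFLUENCE_UPDATE_PAGE"}

def dispatch(tool: str, args: Dict[str, Any],
             execute: Callable[[str, Dict[str, Any]], str],
             confirm: Callable[[str, Dict[str, Any]], bool]) -> str:
    """Run a tool call, requiring explicit user confirmation for
    destructive operations; read-only tools pass straight through."""
    if tool in DESTRUCTIVE_TOOLS and not confirm(tool, args):
        return "aborted: user declined"
    return execute(tool, args)
```

The key design choice is that the confirmation check lives outside the LLM: even a fully prompt-injected model cannot skip the gate, because the dispatcher, not the model, decides which calls need sign-off.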
Full report: https://skillshield.io/report/3d48b404b5a6ce18