Security Audit
documentation-lookup
github.com/affaan-m/everything-claude-code

Trust Assessment
documentation-lookup received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 0 critical, 0 high, 1 medium, and 0 low severity. Key finding: user input passed to an external service without guaranteed redaction.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 20, 2026 (commit 9a478ad6). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| MEDIUM | User input passed to external service without guaranteed redaction. The skill instructs the LLM to pass the user's full question (`query`) directly to external tools (`resolve-library-id` and `query-docs`) that interact with a Context7 MCP server. The skill does include a "Best Practice" ("Redact API keys, passwords, tokens, and other secrets from any query sent to Context7"), but this redaction relies entirely on the LLM's adherence to the instruction. If the LLM fails to redact, sensitive data present in the user's query could be exfiltrated to the third-party Context7 service. Recommendation: implement a programmatic redaction or sanitization layer for user input before it reaches external tools, rather than relying solely on LLM instructions; alternatively, ensure the Context7 MCP server is explicitly designed to discard sensitive information, or that the tool definitions themselves enforce input sanitization. | LLM | SKILL.md:94 |
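The programmatic redaction layer recommended above could be sketched as follows. This is an illustrative example only, not part of the audited skill: the function name and the secret patterns are hypothetical, and real deployments would need a broader, maintained pattern set.

```python
import re

# Illustrative patterns for common secret formats; deliberately not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                  # OpenAI-style API keys
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                  # GitHub personal access tokens
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key IDs
    re.compile(r"(?i)bearer\s+[A-Za-z0-9\-._~+/]+=*"),   # Bearer tokens
    re.compile(r"(?i)(password|passwd|secret|token)\s*[:=]\s*\S+"),  # key=value secrets
]

def redact(query: str, placeholder: str = "[REDACTED]") -> str:
    """Strip likely secrets from a query before it is sent to an external tool."""
    for pattern in SECRET_PATTERNS:
        query = pattern.sub(placeholder, query)
    return query

# redact("password=hunter2")  ->  "[REDACTED]"
```

Because this runs as code before any tool call, it holds regardless of whether the LLM follows its prompt-level redaction instruction; the LLM guidance then becomes defense in depth rather than the sole safeguard.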
Scan History
Embed Code
[](https://skillshield.io/report/1c5c1322a7fd4d68)
Powered by SkillShield