Security Audit
nx-workspace-patterns
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
nx-workspace-patterns received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified one finding: 0 critical, 0 high, 0 medium, and 1 low severity. The sole finding is "Instruction to access internal file via prompt."
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| LOW | **Instruction to access internal file via prompt.** The skill's instructions include a directive for the host LLM to 'open' a specific file (`resources/implementation-playbook.md`). This is an instruction embedded in untrusted content that attempts to manipulate the host LLM into performing a file system operation. While the target file is internal to the skill package and likely benign, the pattern represents a prompt injection attempt. If the LLM's file access capabilities are not strictly sandboxed, or if the path could be manipulated (e.g., through user input), this could lead to data exfiltration or unauthorized file access. *Remediation:* Rephrase instructions to avoid direct commands to the host LLM for file operations. Instead, describe the content of the file or suggest the user manually consult it. If the skill *needs* to access this file, it should do so via a defined tool or API, not a direct LLM instruction. | LLM | SKILL.md:20 |
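The remediation above suggests routing file reads through a defined tool rather than a direct LLM instruction. A minimal sketch of what such a tool-side guard could look like, assuming a Python host; the function names and the skill-root path are hypothetical, not part of SkillShield or the audited skill:

```python
from pathlib import Path

def is_within(root: Path, candidate: Path) -> bool:
    """True if candidate, after resolving '..' and symlinks, stays inside root."""
    root = root.resolve()
    candidate = candidate.resolve()
    return candidate == root or root in candidate.parents

def read_skill_resource(skill_root: Path, relative_path: str) -> str:
    """Hypothetical tool entry point: read a file only if it is inside the skill package."""
    target = skill_root / relative_path
    if not is_within(skill_root, target):
        raise PermissionError(f"path escapes skill package: {relative_path}")
    return target.read_text()
```

With a guard like this, a legitimate request such as `resources/implementation-playbook.md` resolves inside the skill root and is served, while a manipulated path like `../other-skill/secrets.txt` is rejected before any read occurs.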
Powered by SkillShield