Security Audit
internal-comms
github.com/sickn33/antigravity-awesome-skills

Trust Assessment
internal-comms received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. The key finding: the skill delegates instructions to external files, enabling prompt injection.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| HIGH | Skill delegates instructions to external files, enabling prompt injection | LLM | SKILL.md:19 |

The skill instructs the host LLM to load and follow instructions from external Markdown files (e.g., `examples/3p-updates.md`), explicitly directing the LLM to 'Follow the specific instructions in that file'. This creates a prompt injection vulnerability: if those external files are compromised, contain malicious instructions, or are not thoroughly vetted, they can be used to inject arbitrary prompts into the LLM's context, overriding its behavior, extracting sensitive information, or triggering unintended actions.

Recommended mitigations (a combined sketch follows this list):

1. **Strictly vet all external files:** Review every file loaded by the skill (e.g., `examples/*.md`) for malicious instructions or prompt injection attempts.
2. **Sandbox file access:** Limit the LLM's file system access to only the necessary files and directories, and make those directories read-only for the LLM.
3. **Avoid delegating full instruction execution:** Instead of instructing the LLM to 'Follow the specific instructions', parse specific, limited directives from the external files, or use a structured data format that prevents arbitrary instruction injection.
4. **Implement content filtering:** Where possible, apply content filtering or sanitization to the loaded instructions before they are passed to the LLM.
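The mitigations above can be composed into a single loading step. The following is a minimal sketch, assuming a Python host process mediates file access for the model; the `load_vetted_example` helper, the allowlist, and the regex heuristics are hypothetical illustrations, not taken from the internal-comms skill or from SkillShield.

```python
# Illustrative sketch combining mitigations 1-3: load skill example files from a
# fixed, read-only allowlist and reject content that looks like an instruction
# override, instead of telling the model to follow the file's instructions verbatim.

from pathlib import Path
import re

SKILL_ROOT = Path("examples")          # the only directory the loader may read
ALLOWED_FILES = {"3p-updates.md"}      # explicit allowlist instead of a glob

# Rough heuristics for instruction-override attempts; these complement, not
# replace, human review of every vetted file.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"\bsystem prompt\b", re.IGNORECASE),
    re.compile(r"disregard .{0,40}rules", re.IGNORECASE),
]


def load_vetted_example(name: str) -> str:
    """Return file text only if it is allowlisted, stays inside SKILL_ROOT,
    and contains no obvious injection markers."""
    if name not in ALLOWED_FILES:
        raise PermissionError(f"{name} is not an allowlisted example file")

    path = (SKILL_ROOT / name).resolve()
    if not path.is_relative_to(SKILL_ROOT.resolve()):   # Python 3.9+
        raise PermissionError("path escapes the skill's read-only directory")

    text = path.read_text(encoding="utf-8")
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(text):
            raise ValueError(f"{name} contains a possible prompt-injection marker")
    return text


if __name__ == "__main__":
    # Pass the vetted text to the model as reference *data*, not as
    # instructions it is told to follow verbatim.
    reference = load_vetted_example("3p-updates.md")
    print(f"Loaded {len(reference)} characters of vetted reference material")
```

The design point is that the external Markdown is treated as vetted reference data rather than as executable instructions, which directly addresses the 'Follow the specific instructions in that file' pattern flagged at SKILL.md:19.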
Scan History
[Full report](https://skillshield.io/report/6acfb8a368d38520)
Powered by SkillShield