Security Audit
snyk/agent-scan:tests/skills/internal-comms
github.com/snyk/agent-scanTrust Assessment
snyk/agent-scan:tests/skills/internal-comms received a trust score of 85/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 1 finding: 0 critical, 1 high, 0 medium, and 0 low severity. Key findings include Skill instructs LLM to load and execute instructions from untrusted local files.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on March 1, 2026 (commit 30a672c5). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings1
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| HIGH | Skill instructs LLM to load and execute instructions from untrusted local files The skill's `SKILL.md` file, which is treated as untrusted input, instructs the host LLM to load content from specific Markdown files within the `examples/` directory (e.g., `examples/3p-updates.md`, `examples/company-newsletter.md`) and then to 'Follow the specific instructions in that file'. This creates a prompt injection vulnerability where an attacker could embed malicious instructions within these guideline files. Since these guideline files are part of the skill package, they are also considered untrusted from the perspective of the host LLM, allowing for a nested prompt injection attack. 1. Ensure that any files loaded and interpreted as instructions by the LLM are explicitly vetted and marked as trusted system prompts. 2. If the guideline files are intended to be user-editable or part of the untrusted skill, the LLM should be instructed to *extract information* or *follow formatting guidelines* from them, rather than *executing instructions* from them. The host LLM should apply strict sandboxing and content filtering to any data loaded from these files before interpretation. 3. Consider using a structured data format (e.g., JSON, YAML) for guidelines that the LLM can parse, rather than free-form text that it interprets as instructions. | LLM | SKILL.md:22 |
Scan History
Embed Code
[](https://skillshield.io/report/64f27de357f86b29)
Powered by SkillShield