Trust Assessment
network-engineer received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified two findings, both critical severity (0 high, 0 medium, 0 low): "Untrusted instructions for output generation" and "Untrusted role-setting instruction".
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, and LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 40/100, making it the primary area of concern.
Last analyzed on February 14, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
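To illustrate how a layer breakdown like the one above could roll up into a single trust score, here is a minimal sketch. The layer names, the 40/100 LLM Behavioral Safety score, the 65/100 overall score, and the "Caution" label come from the report; the other three layer scores, the unweighted-mean aggregation, and the category thresholds are placeholder assumptions chosen only so the arithmetic reproduces the reported 65, not SkillShield's published formula.

```python
# Hypothetical layer scores: only "LLM Behavioral Safety" = 40 is stated in
# the report; the other three values are placeholders picked so that the
# unweighted mean equals the reported overall score of 65.
LAYER_SCORES = {
    "Manifest Analysis": 75,
    "Static Code Analysis": 75,
    "Dependency Graph": 70,
    "LLM Behavioral Safety": 40,  # lowest layer, per the report
}

def overall_score(scores: dict[str, int]) -> int:
    # Simple unweighted mean, rounded to the nearest integer. A real scorer
    # might weight layers differently or penalize critical findings directly.
    return round(sum(scores.values()) / len(scores))

def trust_category(score: int) -> str:
    # Assumed threshold bands; only "Caution" for a score of 65 is
    # confirmed by the report.
    if score >= 80:
        return "Trusted"
    if score >= 50:
        return "Caution"
    return "High Risk"

score = overall_score(LAYER_SCORES)
print(score, trust_category(score))
```

With these placeholder inputs the mean works out to 65, which lands in the assumed 50-79 "Caution" band, matching the report's classification.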
Security Findings (2)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted instructions for output generation.** The skill package injects instructions into the host LLM's prompt, dictating how it should generate output for large network architectures. These instructions, found within the untrusted input block, tell the LLM to 'generate output incrementally' and 'ask the user which layer to design next', a direct attempt to override or influence the LLM's operational directives from an untrusted source. Remediation: remove all instructions intended for the host LLM from within the untrusted input delimiters; LLM behavior should be controlled by trusted system prompts or tool definitions, not untrusted skill content. | LLM | SKILL.md:5 |
| CRITICAL | **Untrusted role-setting instruction.** The skill package injects a role-setting instruction into the host LLM's prompt. The statement 'You are a network engineer specializing in modern cloud networking, security, and performance optimization' appears within the untrusted input block, a direct attempt to manipulate the LLM's persona or identity from an untrusted source, which could lead to unexpected or malicious behavior if the LLM were to follow it. Remediation: remove all instructions intended for the host LLM from within the untrusted input delimiters; LLM persona and behavior should be controlled by trusted system prompts or tool definitions, not untrusted skill content. | LLM | SKILL.md:7 |
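A crude way to see why both findings trip the scanner is a pattern check over untrusted skill content. The sketch below is a simplistic regex heuristic for illustration only: the patterns, function name, and severity labels are assumptions, and SkillShield's actual LLM Behavioral Safety layer is a behavioral analysis, not a regex scan.

```python
import re

# Illustrative patterns covering the two finding types above:
# role-setting ("You are a ..."), output-generation directives, and
# conversation-steering instructions aimed at the host LLM.
INJECTION_PATTERNS = [
    (r"\byou are an?\b", "role-setting instruction"),
    (r"\bgenerate output\b", "output-generation instruction"),
    (r"\bask the user\b", "conversation-steering instruction"),
]

def flag_untrusted_instructions(text: str) -> list[tuple[int, str]]:
    """Return (line_number, description) pairs for suspicious lines
    found inside untrusted skill content."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, description in INJECTION_PATTERNS:
            if re.search(pattern, line, re.IGNORECASE):
                findings.append((lineno, description))
    return findings
```

Run against the phrases quoted in the findings ("You are a network engineer ...", "generate output incrementally", "ask the user which layer to design next"), this heuristic flags each line, mirroring what the report surfaces at SKILL.md:5 and SKILL.md:7.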
[View the full report on SkillShield](https://skillshield.io/report/efd4bb5b57977c55)
Powered by SkillShield