Trust Assessment
nestjs-expert received a trust score of 65/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 3 findings: 2 critical, 1 high, 0 medium, and 0 low severity. Key findings include: the skill attempts to define its own operational logic and persona; explicit shell command execution is instructed within untrusted content; and the skill instructs execution of arbitrary `npm run` scripts.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. The LLM Behavioral Safety layer scored lowest at 25/100, indicating areas for improvement.
Last analyzed on February 20, 2026 (commit e36d6fd3). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Security Findings (3)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Skill attempts to define its own operational logic and persona.** The skill's `SKILL.md` file, treated as untrusted input, contains extensive instructions for the host LLM on how to operate. This includes defining its persona ('You are an expert...'), decision-making processes ('When invoked: 0. If a more specialized expert fits better...'), and specific steps for analysis and validation. This attempts to manipulate the host LLM's behavior and operational flow based on untrusted content, violating the principle that content within the untrusted delimiters should be treated as data, not instructions. Skill definitions and operational instructions for the host LLM should be provided outside the untrusted content delimiters. The untrusted content should only contain data or declarative information that the skill processes, not instructions for the host LLM's behavior. | LLM | SKILL.md:10 |
| CRITICAL | **Explicit shell command execution instructed within untrusted content.** The skill explicitly lists and instructs the host LLM to execute various shell commands under its 'Detection commands', 'Diagnostic Tools', and 'Fix Validation' sections. These commands include `test`, `grep`, `find`, `xargs`, `npm run`, and `nest info`. If the host LLM is configured to execute these commands based on skill instructions, this represents a direct command injection vulnerability, allowing the skill to run arbitrary shell commands from untrusted input. Remove all explicit shell command instructions from the untrusted skill definition. If the skill requires interaction with the environment, it should use a secure, sandboxed API provided by the host LLM rather than directly instructing shell command execution. | LLM | SKILL.md:101 |
| HIGH | **Instruction to execute arbitrary `npm run` scripts.** The skill instructs the host LLM to execute the `npm run build`, `npm run test`, and `npm run test:e2e` commands. The `npm run` command executes scripts defined in the project's `package.json`, so this grants the skill excessive permission to execute arbitrary code within the target project's context, which could be malicious if the `package.json` is compromised or contains unintended side effects. This is a specific instance of command injection with elevated risk due to the nature of `npm run`. Avoid instructing the execution of `npm run` commands directly. If build or test validation is required, the host LLM should use a tightly controlled, sandboxed environment with explicit whitelisting of allowed commands and arguments, or rely on static analysis rather than dynamic execution. | LLM | SKILL.md:119 |
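The remediation guidance above recommends explicit whitelisting of allowed commands and arguments instead of free-form shell execution. A minimal sketch of such a check, assuming a host-side validator with hypothetical names (`ALLOWED_COMMANDS`, `is_allowed`) that are not part of SkillShield or the skill itself:

```python
import shlex

# Hypothetical allowlist: command name -> exact, vetted argument strings.
# Only these invocations would be forwarded to a sandboxed runner.
ALLOWED_COMMANDS = {
    "npm": {"run build", "run test"},
    "nest": {"info"},
}

def is_allowed(command_line: str) -> bool:
    """Return True only if the command matches the allowlist exactly."""
    try:
        parts = shlex.split(command_line)
    except ValueError:
        return False  # malformed quoting is rejected outright
    if not parts:
        return False
    name, args = parts[0], " ".join(parts[1:])
    return args in ALLOWED_COMMANDS.get(name, set())
```

Because the match is exact, appended shell metacharacters (e.g. `npm run build; rm -rf /`) fail the lookup rather than being interpreted; anything not on the list, including `npm run test:e2e`, is simply refused.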
Full report: [skillshield.io/report/b8e15c5c29bf1908](https://skillshield.io/report/b8e15c5c29bf1908)