Trust Assessment
greeting-skill received a trust score of 89/100, placing it in the Mostly Trusted category. This skill has passed most security checks with only minor considerations noted.
SkillShield's automated analysis identified 2 findings: 0 critical, 0 high, 2 medium, and 0 low severity. Key findings include Potential Prompt Injection via Unsanitized User Input.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings2
| Severity | Finding | Layer | Location | |
|---|---|---|---|---|
| MEDIUM | Potential Prompt Injection via Unsanitized User Input The `greet` and `getTimeBasedGreeting` functions directly embed the `name` parameter into the output string without any sanitization or escaping. If the `name` parameter originates from untrusted user input, an attacker could inject malicious instructions or data that, when processed by the host LLM, could manipulate its behavior or lead to unintended responses. Sanitize or escape the `name` input before embedding it into the greeting string. Consider using a library for safe string interpolation or ensuring the LLM's prompt engineering is robust against such injections. For example, enclose the user-provided name in specific delimiters that the LLM is instructed to treat as literal text. | LLM | greet.ts:9 | |
| MEDIUM | Potential Prompt Injection via Unsanitized User Input The `greet` and `getTimeBasedGreeting` functions directly embed the `name` parameter into the output string without any sanitization or escaping. If the `name` parameter originates from untrusted user input, an attacker could inject malicious instructions or data that, when processed by the host LLM, could manipulate its behavior or lead to unintended responses. Sanitize or escape the `name` input before embedding it into the greeting string. Consider using a library for safe string interpolation or ensuring the LLM's prompt engineering is robust against such injections. For example, enclose the user-provided name in specific delimiters that the LLM is instructed to treat as literal text. | LLM | greet.ts:22 |
Scan History
Embed Code
[](https://skillshield.io/report/7711ab5a80506795)
Powered by SkillShield