Trust Assessment
mlops-engineer received a trust score of 72/100, placing it in the Caution category. This skill has some security considerations that users should review before deployment.
SkillShield's automated analysis identified 1 finding: 1 critical, 0 high, 0 medium, and 0 low severity. The key finding: untrusted skill content attempts to manipulate LLM behavior and persona.
The analysis covered 4 layers: Manifest Analysis, Static Code Analysis, Dependency Graph, LLM Behavioral Safety. All layers scored 70 or above, reflecting consistent security practices.
Last analyzed on February 13, 2026 (commit 13146e6a). SkillShield performs automated 4-layer security analysis on AI skills and MCP servers.
Layer Breakdown
Behavioral Risk Signals
Security Findings (1)
| Severity | Finding | Layer | Location |
|---|---|---|---|
| CRITICAL | **Untrusted skill content attempts to manipulate LLM behavior and persona.** The `SKILL.md` file, designated as untrusted input, contains explicit instructions aimed at controlling the host LLM's output-generation strategy (e.g., "generate output incrementally", "ask the user which component to implement next") and defining its operational persona ("You are an MLOps engineer..."). These directives, originating from an untrusted source, constitute a direct prompt-injection attempt to influence the LLM's core behavior. **Remediation:** remove all instructional content from within the untrusted-input delimiters (`<!---UNTRUSTED_INPUT_START...--->` and `<!---UNTRUSTED_INPUT_END...--->`); provide instructions for the LLM outside these delimiters, or use the skill's manifest for configuration. | LLM | SKILL.md:5 |
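The check behind this finding can be sketched as a simple scanner: walk the file, track whether the current line sits between the untrusted-input delimiters, and flag lines that look like LLM-directed instructions. This is a minimal illustration only; the delimiter patterns are stand-ins (the report truncates the real markers with `...`), and the heuristics below are hypothetical examples drawn from the finding text, not SkillShield's actual rules.

```python
import re

# Stand-in delimiter patterns; the real markers are truncated in the report
# as <!---UNTRUSTED_INPUT_START...---> / <!---UNTRUSTED_INPUT_END...--->.
START = re.compile(r"<!---UNTRUSTED_INPUT_START.*?--->")
END = re.compile(r"<!---UNTRUSTED_INPUT_END.*?--->")

# Illustrative heuristics for LLM-directed content: persona assignment and
# output-strategy directives quoted in the finding above.
INSTRUCTION_PATTERNS = [
    re.compile(r"\byou are\b", re.IGNORECASE),        # persona definition
    re.compile(r"\bgenerate output\b", re.IGNORECASE),
    re.compile(r"\bask the user\b", re.IGNORECASE),
]

def flag_untrusted_instructions(text: str) -> list[str]:
    """Return lines inside untrusted-input spans that match instruction heuristics."""
    flagged = []
    inside = False
    for line in text.splitlines():
        if START.search(line):
            inside = True
            continue
        if END.search(line):
            inside = False
            continue
        if inside and any(p.search(line) for p in INSTRUCTION_PATTERNS):
            flagged.append(line.strip())
    return flagged
```

Note that the same phrase outside the delimiters is not flagged: the severity here comes from instructions originating in the *untrusted* span, not from instructional wording as such.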
Scan History
[Full report](https://skillshield.io/report/a9f195258f4d2ada)
Powered by SkillShield